What legal challenges does the rapid development of neural networks and artificial intelligence technologies pose to the state and business? We discuss this with Valery Sidorenko, CEO of the digital agency "Interium" and head of the working group on developing an approach to regulating deepfakes at the Public Council under the Ministry of Digital Development of Russia.
In one of our previous articles, we examined five key digital trends changing the communications industry. One of them was legal evolution, a topic worth revisiting in an applied context.
Today, artificial intelligence technologies have penetrated all key areas of life, from medicine and transport to public services and the creative industries. Yet despite the pace of adoption, the regulatory system is not keeping up. The legal framework remains fragmented, approaches are inconsistent, and responsibility is blurred.
Does AI need to be regulated? The answer is obviously yes. The real question is how to build regulation that sets clear, stable boundaries without hindering the development of the technology.
Why regulation is necessary
Today, state regulation of neural networks should address at least two key tasks.
The first is minimizing risks and abuse. One of the most obvious examples is the spread of deepfakes. The technology is used not only for entertainment but also for fraud and manipulation. In Russia, there have been documented cases of deepfakes impersonating well-known figures and officials, including repeated attempts at telephone fraud in the name of the Mayor of Moscow using a synthesized voice.
At the same time, a deepfake is not synonymous with crime. The technology is widely used in film, marketing, advertising, and elsewhere; it is a full-fledged tool with a broad range of legitimate applications. Regulation should not amount to a ban. Its task is to set the boundaries of what is permissible and to separate harmful scenarios from neutral ones without blocking the technology's potential.
The second key task is to create a framework for areas where AI is used at scale and without obvious risk but is barely covered by current legislation. Here neural networks analyze data, make decisions, and personalize services, yet neither the status of those decisions, nor the distribution of responsibility, nor the protection of the rights of those involved is fixed in law. The result is legal uncertainty that, if a conflict arises, will have to be resolved post factum, in the absence of established norms.
Regulation models
Around the world, approaches to regulating neural networks have already coalesced into several broad models that differ in the degree of state intervention, the structure of requirements, and the logic of legal control. These differences matter for the national discussion: not as templates for mechanical copying, but as a map of the range of possible solutions.
There are models built on strict regulatory control. Under them, AI technologies are treated as sources of potential harm and are therefore subject to prior licensing and restrictions on permitted applications. This approach aims to reduce systemic risk and protect the public interest, but it often comes at the cost of flexibility and creates barriers for technology businesses. In China, AI regulation includes mandatory registration of algorithms, control over AI-generated content, and strict data protection requirements: companies, for example, must obtain regulatory approval before launching generative models, to minimize the risks of misinformation and data leaks.
At the other pole are models with a minimal level of formalized regulation, where the main burden of responsibility falls on industries, platforms, and the developers themselves. Here the state steps back from a directive role and encourages self-regulation, establishing only basic principles (for example, the inadmissibility of discrimination or a duty to disclose information). Such systems adapt faster to change but cope worse with situations that demand direct intervention, such as violations of citizens' rights or abuses in the public sphere. In the USA, there is no single federal law on AI; regulation is fragmented and relies on industry standards (for example, the NIST AI Risk Management Framework) and voluntary commitments by companies. The main emphasis is on self-regulation and market competition, although individual states are introducing local rules, for example on the use of AI in hiring.
Between these extremes lies an intermediate class of hybrid models. Their essence is to combine mandatory requirements (say, on algorithmic transparency or personal data protection) with room for experimentation, development, and industry adaptation. Such approaches rely not only on laws but also on technical standards, ethical codes, regulatory sandboxes, and other flexible legal tools. The EU has adopted the AI Act, a comprehensive law that classifies AI systems by risk level: high-risk systems (for example, in medicine or law enforcement) face strict regulation, including certification and audits, while low-risk systems are given more freedom and regulators there support innovation. South Korea combines a soft legislative framework with support for innovation: its AI Basic Act sets basic requirements for transparency and security but puts the emphasis on developing AI through government investment and public-private partnerships, including pilot projects in healthcare and education.
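To make the risk-based logic behind these hybrid models concrete, here is a minimal, purely illustrative sketch of how a risk-tier taxonomy might be expressed in code. The tier names loosely follow public summaries of the EU AI Act; the use cases and the obligations attached to each tier are simplified placeholders, not statutory text:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers loosely following public summaries of the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "registration, certification, audit"
    LIMITED = "transparency duties (e.g., disclosing AI-generated content)"
    MINIMAL = "no specific obligations"

# Hypothetical mapping from use cases to tiers; a real classifier would
# follow the statutory annexes, not a hand-written dictionary.
USE_CASE_TIER = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "hiring_screen": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the simplified regulatory burden for a given use case."""
    tier = USE_CASE_TIER.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for case in USE_CASE_TIER:
        print(obligations(case))
```

The design point such a taxonomy illustrates is that obligations scale with the risk of the use case rather than with the technology itself, which is what separates hybrid models from blanket licensing on one side and pure self-regulation on the other.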
The choice of model depends on many factors: the maturity of the technology sector, the level of legal culture, the state of the judicial system, and the priorities of state policy. For Russia, the task is not to pick a ready-made template but to chart its own trajectory, deciding where strictness is needed and where to leave room for growth. AI calls not for a single universal law but for an architecture in which control, incentives, and responsibility coexist.
Key areas of regulation
Modern AI regulation should cover several key areas where real conflicts of interest, legal uncertainty, and systemic gaps already arise today. The most acute issues are liability and the handling of AI outputs, at both the product level and the data level.
The first block is liability for the actions of AI. As autonomous systems spread, one question comes up ever more often: who answers if a neural network makes a mistake that causes damage? The candidates are the developer, the operator, and the user. All are connected to the system, yet legislation does not yet assign a clear area of responsibility to any of them. Mechanisms for distributing liability are needed to avoid a legal vacuum.
The second direction concerns copyright and data. It covers three distinct legal problems.
First, the status of works created with the help of AI. Is the output intellectual property? And if so, who owns it: the user, the model developer, or the platform?
Second, the legality of using data for training. Most neural networks are trained on third-party content: texts, images, and video, some of it protected by copyright. Using such data without consent or a license draws claims from individual authors and entire industries alike.
Third, the availability of the data itself. Even where a legal basis exists, many valuable datasets are simply not digitized, are legally restricted, or are technically unusable.
Until these issues are resolved, AI regulation will keep running into conflicts at both the input and the output, from development through to use.
AI regulation in Russia
Russia is among the countries with a high level of digital development: neural networks are already being actively deployed in services, business practice, and public communications. At the same time, a sustainable legal framework is only beginning to take shape. A key role in forming regulation is played by expert groups created with the participation of the State Duma of the Russian Federation, Roskomnadzor (RKN), the Federal Antimonopoly Service (FAS), and the Ministry of Digital Development, Communications and Mass Media. Together, these bodies are working on questions of liability, algorithmic transparency, and the distribution of rights among participants in the AI ecosystem.
One step towards regulation was a bill presented in the spring of 2025. The document proposes to enshrine in legislation the concepts of "artificial intelligence," "AI technologies," and "AI systems," and to define the roles of the participants: developers, operators, and users. This is an important move towards a unified terminological and legal framework.
The bill also proposes classifying AI systems by risk level, from minimal to unacceptable. High-risk systems (for example, those used in medicine, transport, or law enforcement) would be subject to mandatory registration, certification, and liability insurance. For the most dangerous scenarios, those that threaten the foundations of security or human rights, an outright ban on development and use is envisaged.

In parallel, law enforcement agencies, including the Ministry of Internal Affairs and the FSB, are focused on tightening legislation against crimes involving AI. This includes drafting amendments to the Criminal Code of the Russian Federation to strengthen liability for creating and distributing malicious content, including deepfakes used in fraudulent schemes.

Work on AI regulation continues through interdepartmental cooperation and consultations with industry players; the next stage is aligning approaches. The country is moving in this direction without abrupt steps but with growing systematicity. It is not a quick process, but there is movement in it: from reaction to strategy, from scattered initiatives to architecture.