Prohibited artificial intelligence practices in the legislation of the European Union

Free access

The article examines the provisions of European Union law on artificial intelligence that concern practices posing unacceptable risk. Attention is drawn to the fact that the core idea of the European approach is to support the development of trustworthy artificial intelligence, which is why the emphasis is placed on a risk-based approach. Unacceptable risk is one of the levels of this approach, which also covers artificial intelligence systems posing minimal, limited and high risk. The use of unacceptable-risk artificial intelligence systems is prohibited under European Union law, except in a number of circumstances specified in the legislation. The article considers techniques that affect the subconscious; manipulative and deceptive techniques; techniques exploiting the vulnerabilities of a person or group of persons; biometric identification; predictive techniques based on profiling; and emotion recognition. The exceptions permitted by law for unacceptable-risk artificial intelligence systems are analysed. It is concluded that the absence of comprehensive regulation of artificial intelligence in Russia cannot be unambiguously regarded as a negative factor, since the legislator has the opportunity to analyse the outcomes of regulatory approaches adopted in foreign countries. It is further concluded that the Russian Federation does not need to adopt a law specifically devoted to artificial intelligence: the relevant norms may well be implemented in the existing sectoral legislation.


Artificial intelligence, the law of the European Union on artificial intelligence, legal regulation of artificial intelligence technologies, comparative law

Short address: https://sciup.org/14131469

IDR: 14131469   |   DOI: 10.47475/2311-696X-2024-42-3-107-112

Scientific article