Human vs. Artificial intelligence - EU's legal response

Author: Marijana Mladenov

Journal: Pravo – teorija i praksa

Section: Original scientific work

Issue: Special issue, vol. 40, 2023.


Artificial intelligence (AI) has the capacity to improve not only the individual quality of life, but also economic and social welfare. Although AI systems have many advantages, they also pose significant risks, creating a wide range of moral and legal dilemmas. The European Union has been creating a legal framework for developing, trading, and using AI-driven products, services, and systems in order to reduce the risks connected with AI systems and to prevent any harm they may cause. The main focus of this paper is the analysis of the Proposal for the Artificial Intelligence Act submitted by the European Commission in April 2021. The goal of the article is to move toward a possible resolution of the dilemma of whether the AIA proposal is appropriate for the AI era by addressing the scope of its application, the prohibited AI practices, the rules on high-risk AI systems, the specific transparency obligations, as well as certain regulatory gaps. The article should be viewed as an initial analysis of the AIA proposal, intended to provide a useful framework for future discussion.


Keywords: artificial intelligence, the European Union, regulatory framework, the Proposal for the Artificial Intelligence Act

Short address: https://sciup.org/170202117

IDR: 170202117   |   DOI: 10.5937/ptp2300032M



1.    Introduction

The European Union has been creating a legal framework for developing, trading, and using AI-driven products, services, and systems to reduce the risks connected with AI systems and to prevent any harm they may cause. The European Parliament passed a “Resolution on Civil Law Rules on Robotics” on February 16, 2017, which specifically called for legislation on the liability of robots and AI (Resolution on Civil Law Rules on Robotics, 2017). Furthermore, the Commission adopted the “Communication on Artificial Intelligence for Europe” on April 25, 2018 (Communication on Artificial Intelligence for Europe, 2018). With the help of an expert panel, the Commission stated in this communication that it would examine whether the national and EU liability frameworks are appropriate in the context of the problems posed by AI. Two years later, the Commission published a package consisting of four documents, including the White Paper “On Artificial Intelligence – A European approach to excellence and trust” (Koch, 2020). In April 2021, the European Commission moved ahead with the Proposal for the Artificial Intelligence Act (hereinafter: AIA proposal), which is the main subject of the research in this paper (Proposal for the Artificial Intelligence Act, 2021).

In view of the above, the goal of the article is to move toward a possible resolution of the dilemma of whether the AIA proposal is appropriate for the AI era by addressing the scope of this act, the prohibited AI practices, the rules on high-risk AI systems, the specific transparency obligations, as well as certain regulatory gaps.

2.    The scope of the AIA proposal

The scope of the AIA proposal is defined by the subject matter of the regulation as well as the scope of its application. Concerning the subject matter, Article 1 states that the AIA proposal establishes:

  • (a)    “harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;

  • (b)    prohibitions of certain artificial intelligence practices;

  • (c)    specific requirements for high-risk AI systems and obligations for operators of such systems;

  • (d)    harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;

  • (e)    rules on market monitoring and surveillance” (Proposal for the Artificial Intelligence Act, 2021).

3.    Prohibited uses of AI

Article 5 of the AIA proposal establishes a list of prohibited AI practices. The list includes all AI systems whose use is not in accordance with fundamental European values, such as respect for fundamental human rights and freedoms. Four different types of AI practices are prohibited under the standards outlined in Article 5 of the AIA proposal.

The first, “subliminal or manipulative AI practices”, covers systems with “a significant potential to manipulate persons through subliminal techniques beyond their consciousness” in order to materially distort a person’s behaviour in a way that causes or is likely to cause physical or psychological harm to that person or to another person (Explanatory Memorandum of the AIA proposal, 2021, p. 12). Even though the AIA proposal does not define the term “subliminal”, this phrase typically describes a perception that is below the level of awareness (Klein, 1966, p. 726). The activity’s potential to harm someone physically or psychologically should be considered the final trigger. The scope of the provision is significantly limited by this requirement (Veale & Borgesius, 2021, p. 99).

The second type of prohibited AI refers to practices exploiting the vulnerabilities of particularly vulnerable groups, such as children or persons with disabilities, in order to materially distort a person’s behaviour in a way that harms or is likely to harm that person’s or another person’s physical or psychological well-being. The main element of this provision is vulnerability, which is not extensively defined but only illustrated by examples of particularly vulnerable groups, such as children or individuals with disabilities (Neuwirth, 2022, p. 7).

The third category of prohibited AI practices, “social scoring systems”, includes systems used by “public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics” (Article 5 of the Proposal for the Artificial Intelligence Act, 2021). By restricting the ban on social scoring to public authorities, the AIA proposal seems to ignore the use of such systems by private businesses, especially in high-risk sectors where they may have the potential to indirectly impact fundamental rights. Various infrastructures, including delivery, telecommunications, and transportation, are under the authority of so-called AI companies (Rahman, 2017). The above exclusion can therefore have serious socioeconomic implications for individuals, which imposes the need to make this provision universally applicable.

In addition, the fact that Article 5 cannot be amended by the European Commission could prove quite challenging in the context of the implementation of the AIA, since some problematic aspects of AI practices can only be recognized ex post.
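Viewed schematically, the prohibitions of Article 5 amount to a closed enumeration of practice categories. The following sketch models that enumeration; the member names are the author's illustrative shorthand rather than the Act's wording, and the fourth category (real-time remote biometric identification for law enforcement, subject to exceptions) is taken from the text of Article 5 itself, since the discussion above covers only the first three:

```python
from enum import Enum

class ProhibitedPractice(Enum):
    """Illustrative shorthand for the four categories of AI practice
    banned by Article 5 of the AIA proposal (not the Act's own terms)."""
    SUBLIMINAL_MANIPULATION = "subliminal or manipulative techniques causing harm"
    EXPLOITING_VULNERABILITIES = "exploiting vulnerabilities of specific groups"
    SOCIAL_SCORING = "social scoring by or on behalf of public authorities"
    REMOTE_BIOMETRIC_ID = "real-time remote biometric identification for law enforcement"
```

Because the list is a fixed enumeration that the Commission cannot extend, any practice outside these four categories falls through to the risk-based regime discussed in the next section.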

4.    Rules on high-risk AI systems

For AI systems that create a high risk to human health and safety or to fundamental rights, or “high-risk AI systems”, Title III of the AIA proposal establishes a new regulatory regime with precise standards. The AIA proposal adopts a prescriptive “list-based approach”, which outlines which systems are considered high-risk rather than defining the term itself. A system is categorized as high-risk based on its intended use and on current product safety regulations. As a result, the categorization as high-risk depends not only on the task performed by the AI system but also on the precise objectives and operating procedures of that system.

Two main groups of high-risk AI systems are identified in Title III, along with the classification criteria. High-risk systems comprise, on the one hand, systems intended for use as safety components of products that are subject to “third-party ex-ante conformity assessment” under the EU legislation listed in Annex II of the proposal and, on the other hand, standalone AI systems used in high-risk domains (Explanatory Memorandum of the AIA proposal, 2021, p. 14). The European Commission has identified eight use categories for high-risk standalone AI systems, listed in Annex III. By applying a set of criteria and a risk assessment methodology, the European Commission may expand the list of high-risk AI systems used within specified pre-defined sectors, in order to ensure that the legislation can be adapted to developing uses and applications of AI. However, it is important to note that the Commission can only do this if the high-risk AI systems are intended to be used in any of the activities stated in Annex III, points 1 through 8. This provision could be quite challenging because we cannot be aware of all categories of high-risk systems, since AI is a rapidly evolving field that is progressively influencing other industries (Smuha et al., 2021, p. 11).
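The two routes into the high-risk category described above can be sketched as a simple classification function. This is a hypothetical illustration, not the Act's own terminology: the area labels are abbreviated paraphrases of the eight Annex III points, and the boolean flag stands in for the Annex II safety-component test:

```python
from typing import Optional

# Abbreviated paraphrase of the eight Annex III areas of standalone
# high-risk use (illustrative labels; see Annex III for exact wording).
ANNEX_III_AREAS = {
    "biometric identification and categorisation",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def is_high_risk(safety_component_under_annex_ii: bool,
                 intended_area: Optional[str]) -> bool:
    """Sketch of the two Title III routes into the high-risk category:
    (1) the system is a safety component of a product already subject to
    third-party ex-ante conformity assessment under the EU legislation
    listed in Annex II, or (2) it is a standalone system intended for
    use in one of the Annex III areas."""
    if safety_component_under_annex_ii:
        return True
    return intended_area in ANNEX_III_AREAS
```

The sketch makes the limitation noted above concrete: a system whose intended area falls outside the fixed `ANNEX_III_AREAS` set escapes the high-risk regime entirely, however harmful it may turn out to be.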

In addition, Chapter 2 outlines the legal requirements for high-risk AI systems related to “data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security”, which links to the obligations of regulated actors stated within Chapter 3 (Explanatory Memorandum of the AIA proposal, 2021, p. 13). The great majority of these obligations are the responsibility of providers. With respect to data and data governance, Article 10 of the AIA proposal mostly refers to training, validation, and testing data sets. Data quality criteria for sets of data on individuals or groups of people (not necessarily involving personal data in GDPR terms), including “special categories of personal data” (as defined in Article 9 of the GDPR), are highly detailed in the subject requirements (Regulation (EU) 2016/679).

The following requirement refers to technical documentation. Providers must draw up technical documentation that includes all the information in line with Annex IV. Moreover, according to Article 12 of the AIA proposal and its record-keeping requirements, providers need to facilitate logging in order to enable traceability appropriate to a system’s risks. Providers are only required to keep logs for the relevant period while such logs are still under their control; otherwise, users are required to do so.

The standards for the transparency of high-risk AI systems are defined in Article 13. A high-risk AI system must be designed so as to be “sufficiently transparent to enable users to interpret the system’s output and use it appropriately”, and it must also come with instructions and information that are “relevant, accessible, and comprehensible to users” (Article 13 of the Proposal for the Artificial Intelligence Act, 2021). In addition to the standards above, Article 14 stipulates that providers must create systems that can be properly supervised by natural persons, using “human-machine interface tools” (Article 14 of the Proposal for the Artificial Intelligence Act, 2021). To ensure the protection of fundamental rights, oversight is necessary for all actions linked to the creation, implementation, and use of AI systems. Moreover, Article 15 states that high-risk AI systems must be designed and built in such a way that, in the context of their intended use, they achieve the required level of accuracy, robustness, and cybersecurity and operate consistently over their lifecycle (Article 15 of the Proposal for the Artificial Intelligence Act, 2021).

The framework for the participation of notified bodies in conformity assessment processes as independent third parties is provided in Chapter 4, while the specific conformity assessment procedures that must be implemented for every type of high-risk AI system are included in Chapter 5. The approach to conformity assessment aims to reduce the pressure on both notified bodies and economic operators, whose capacity must be ramped up gradually over time.

5.    Specific transparency obligations

Title IV of the AIA proposal outlines specific transparency obligations. The AIA proposal introduces transparency requirements for systems that interact with humans, because people have a right to know when they are engaging with a machine’s algorithm rather than a human being. Similar transparency requirements apply to the disclosure of deep fakes/synthetic content, biometric categorization, and automated emotion detection systems. Except for biometric categorization systems that are legally allowed to be used for crime prevention, users of emotion recognition or biometric categorization systems are required to notify exposed persons of the system’s operation. In comparison with data protection law, it is quite challenging to understand the contribution of this provision. Data protection law already requires that users of emotion recognition or biometric categorization systems that process personal data notify individuals of, among other things, the existence and purposes of such processing. It is therefore difficult to determine what the real scope of this provision is.

In addition, specific transparency obligations are also introduced for limited-risk AI systems such as chatbots. Low-risk AI systems are the only category excluded from transparency obligations (Kop, 2021).
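The risk-based structure described in Sections 3 through 5 can be summarised as a mapping from risk tier to the dominant regulatory consequence. This is an illustrative sketch only: the tier labels follow the common “risk pyramid” reading of the proposal, and the descriptions paraphrase rather than quote the Act:

```python
# Illustrative mapping from the proposal's risk tiers (per the common
# "risk pyramid" reading) to their main regulatory consequence.
# Labels and descriptions are shorthand, not terms of the Act.
RISK_TIER_OBLIGATIONS = {
    "unacceptable": "prohibited outright under Article 5",
    "high": ("conformity assessment plus the Title III requirements: data "
             "governance, documentation, transparency, human oversight, "
             "accuracy, robustness and cybersecurity"),
    "limited": ("specific transparency obligations under Title IV, e.g. for "
                "chatbots, deep fakes and emotion recognition systems"),
    "minimal": "no new obligations under the proposal",
}

def obligations_for(tier: str) -> str:
    """Return the main regulatory consequence for a given risk tier."""
    return RISK_TIER_OBLIGATIONS[tier]
```

Laid out this way, the asymmetry criticised in the next section is easy to see: obligations attach to tiers of systems, while the individuals affected by those systems appear nowhere in the mapping.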

6.    Identifying additional regulatory gaps of the AIA proposal

Even though the above analysis of the AIA proposal has already identified certain aspects of the Act that need further clarification, the doctrine has concluded that the act has some additional gaps. The most significant one is that the AIA proposal does not include any individual right of enforcement. Although the Act is designed to protect fundamental rights, it offers no remedies through which individuals can seek redress if the regulation is violated. The AIA proposal does not include any mechanism allowing individuals to challenge AI-driven decision-making (Ebers, 2021, p. 19).

Moreover, a European approach to AI should consider not only human rights but also other priorities such as climate change and sustainability. In this respect, the AIA proposal makes no direct mention of “Green AI” or “Sustainable AI” as a clear objective of a European understanding of AI development according to the standards of the European Green Deal (Gailhofer et al., 2021). The Act only recognizes the necessity for relevant action in the high-impact field of climate change and the potential of AI to contribute to socially and environmentally beneficial outcomes.

7.    Conclusion

The AIA proposal intends to establish a uniform legal system for AI in the EU. Through a comprehensive framework, the AIA proposal addresses both the potential benefits of AI and the moral questions raised by the different threats associated with it. Nevertheless, some aspects require further clarification. The main aspect that needs to be improved is the definition of the term “AI”. The AIA proposal includes a quite broad definition, which increases the risk of overregulation. Furthermore, the lack of individual enforcement rights in the AIA proposal undermines the protection of fundamental rights as the most important goal of this regulation. The AIA must guarantee a right to remedy that addresses potential violations of the Regulation or infringements of fundamental rights.

This article cannot and has not discussed all aspects of the AIA proposal; the author has demonstrated some of the complexities of this particularly significant instrument. After all, creating a safe and adequate regulatory framework for AI in Europe determines not only the way we design technology but also the way we shape our society’s future.

Mladenov Marijana

Pravni fakultet za privredu i pravosuđe u Novom Sadu, Univerzitet Privredna akademija u Novom Sadu, Srbija

HUMAN VS. ARTIFICIAL INTELLIGENCE – THE EU’S LEGAL RESPONSE

SUMMARY: Artificial intelligence has the capacity to improve not only the individual quality of life, but also economic and social welfare. Although AI systems have many advantages, they also pose significant risks, creating a wide range of moral and legal dilemmas. The European Union is creating a legal framework for the development, trade and use of AI-driven products, services and systems in order to reduce the risks associated with AI systems and to prevent any harm they may cause. The main focus of this paper is the analysis of the Proposal for the Artificial Intelligence Act submitted by the European Commission in April 2021. The goal of the article is to contribute to resolving the dilemma of whether the proposed regulation is adequate to the demands of the AI era by addressing the scope of its application, the prohibited AI practices, the rules on high-risk AI systems, the specific transparency obligations, as well as certain legal gaps. The article should be viewed as an initial analysis of the proposed Artificial Intelligence Act, intended to provide a useful framework for future discussion.

Keywords: artificial intelligence, the European Union, regulatory framework, the Proposal for the Artificial Intelligence Act.

References

  • Artificial Intelligence Act. (2021). Proposal for a regulation of the European Parliament and the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. EUR-Lex – 52021PC0206
  • Bjelajac, Ž., & Filipović, A. M. (2021). Specifičnosti digitalnog nasilja i digitalnog kriminala [Specific characteristics of digital violence and digital crime]. Pravo – teorija i praksa, 38(4), pp. 16–32. DOI: 10.5937/ptp2104016B
  • Bogucki, A., Engler, A., Perarnaud, C., Renda, A. (2022). The AI Act and Emerging EU Digital Acquis, Overlaps, gaps and inconsistencies, CEPS. Downloaded 2022, September 23 from https://www.ceps.eu/wp-content/uploads/2022/09/CEPS-In-depth-analysis-2022-02_The-AI-Act-andemerging-EU-digital-acquis.pdf
  • Communication from the Commission to the European Parliament, the European Council, the Council, European Economic and Social Committee and the Committee of the Regions Artificial Intelligence for Europe. (2018). COM(2018) 237 final
  • Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, OJ L 119/89
  • Ebers, M. (2021). Regulating Explainable AI in the European Union. An Overview of the Current Legal Framework. In: Colonna, L., Greenstein G. (eds.), Nordic Yearbook of Law and Informatics, (pp. 1-20). Downloaded 2022, October 15 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3901732
  • Ebers, M., Hoch, V. R., Rosenkranz, F., Ruschemeier, H., & Steinrötter, B. (2021). The European commission’s proposal for an artificial intelligence act – a critical assessment by members of the robotics and AI law society (RAILS). J – Multidisciplinary Scientific Journal, 4(4), 589-603. DOI: 10.3390/j4040043
  • European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics. (2017). (2015/2103(INL))
  • Gailhofer, P., Herold, A., Schemmel, J. P., Scherf, C.-S., Urrutia, C., Köhler, A., & Braungardt, S. (2021). The Role of Artificial Intelligence in the European Green Deal. Study requested by the AIDA Committee of the European Parliament. Downloaded 2022, October 15 from https://www.europarl.europa.eu/RegData/etudes/STUD/2021/662906/IPOL_STU(2021)662906_EN.pdf
  • Gill-Pedro, E. (2021). The Most Important Legislation Facing Humanity? The Proposed EU Regulation on Artificial Intelligence. Nordic Journal of European Law, 4(1), pp. 4–10. Downloaded 2022, October 5 from https://journals.lub.lu.se/njel/article/view/23473/20819
  • Greenleaf, G. (2021). The ‘Brussels effect’ of the EU’s ‘AI Act’ on data privacy outside Europe. Privacy Laws & Business International Report, 1, pp. 1–10. Downloaded 2022, September 25 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3898904
  • Helberger, N., & Diakopoulos, N. (2022). The European AI act and how it matters for research into AI in media and journalism. Digital Journalism, pp. 1-10. DOI: 10.1080/21670811.2022.2082505
  • Hickman, E., Petrin, M. (2021). Trustworthy AI and Corporate Governance: The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective. Eur Bus Org Law Rev, 22, pp. 593–625. DOI: 10.1007/s40804-021-00224-0
  • Klein, E. (1966). A Comprehensive Etymological Dictionary of the English Language. Amsterdam: Elsevier
  • Koch, B. A. (2020). Liability for Emerging Digital Technologies: An Overview. Journal of European Tort Law, 11(2), pp. 115-136. DOI:10.1515/jetl-2020-0137
  • Kolarević, E. (2022). Uticaj vještačke inteligencije na uživanje prava na slobodu izražavanja. [The influence of Artificial intelligence on the right to freedom of expression] Pravo – teorija i praksa, 39(1), pp. 111-126. DOI: 10.5937/ptp2201111K
  • Kop, M. (2021). EU Artificial Intelligence Act: The European Approach to AI. Transatlantic Antitrust and IPR Developments, 2, 1-11. Downloaded 2022, October 15 from https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai/
  • Neuwirth, R. J. (2022). Prohibited Artificial Intelligence Practices in the Proposed EU Artificial Intelligence Act. DOI: 10.2139/ssrn.4261569
  • Perucica, N., Andjelkovic, K. (2022). Is the future of AI sustainable? A case study of the European Union. Transforming Government: People, Process and Policy, 16 (3), pp. 347-358. DOI: 10.1108/TG-06-2021-0106
  • Rahman, K. S. (2017). The new utilities: Private power, social infrastructure, and the revival of the public utility concept. Cardozo L. Rev., 39, pp. 1621–1692. Downloaded 2022, October 5 from https://brooklynworks.brooklaw.edu/cgi/viewcontent.cgi?article=1987&context=faculty
  • Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) OJ L 119/1
  • Smuha, N. A., Ahmed-Rengers, E., Harkens, A., Li, W., MacLaren, J., Piselli, R., & Yeung, K. (2021). How the EU can achieve legally trustworthy AI: a response to the European commission’s proposal for an artificial intelligence act. DOI: 10.2139/ssrn.3899991
  • Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act—Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 22(4), pp. 97-112. DOI: 10.9785/cri-2021-220402