Explainable Artificial Intelligence and Legal Ethos: Developing Key Performance Indicators for ‘G20 Giants’

Authors: Bhatt N., Bhatt J. N.

Journal: Journal of Digital Technologies and Law

Issue: 3 (4), 2025.

Free access

Objective: to study the “right to explanation” in the context of the PEEC doctrine (public interest, environmental sustainability, economic development, criminal justice) in order to develop key performance indicators that reflect the socio-cultural characteristics of different countries and ensure adaptability, transparency and cultural relevance in the regulation of explainable artificial intelligence.

Methods: the research uses a distinctive methodological approach that combines the iterative processes of soft systems methodology with a theoretical framework based on the PEEC principles. This integration makes it possible to comprehensively examine the social, economic, political and legal regimes of the ‘G20 Giants’ – the United States of America, the Federal Republic of Germany, Japan, the Republic of India, the Federative Republic of Brazil and the Russian Federation – when designing key performance indicators. The proposed key performance indicators are applicable to assessing the transparency and accountability of artificial intelligence systems, simplifying data collection and practical implementation across various cultural contexts. The developed model corresponds to actual social needs in decision-making that relies on artificial intelligence technologies.

Results: the study proposes a new legal model for regulating explainable artificial intelligence based on a system of key performance indicators. Besides addressing the challenges of regulating explainable artificial intelligence across different cultural, ethical and legal domains, this model ensures that the regulatory system properly accounts for anthropocentric aspects, since it is focused on unlocking the true potential of artificial intelligence. The proposed approach promotes the most effective use of artificial intelligence technologies for the benefit of society from the perspective of sustainable development.

Scientific novelty: the work applies a distinctive scientific approach that takes into account cultural, ethical, socio-economic and legal differences when developing a legal framework for regulating explainable artificial intelligence. This allows the legal framework to be adapted to various national conditions while contributing to responsible governance of artificial intelligence through a system of checks and balances.

Practical significance: the results obtained allow the proposed legal model to be used in the practical activities of government agencies and developers of artificial intelligence systems to ensure the transparency and explainability of technologies. Effective adjustment of the proposed key performance indicators to the specifics of individual states will optimize them for universal use. Although all five key performance indicators are relevant for the ‘G20 Giants’, their relative significance depends on the socio-cultural and legal conditions of a particular state. Further research should cover a wider range of issues, including other developed and developing countries, in order to adapt the regulation of explainable artificial intelligence to various national and global requirements.


Keywords: algorithmic transparency, artificial intelligence, criminal justice, digital technologies, economic development, environmental sustainability, ethics, explainable artificial intelligence, law, public interest

Short URL: https://sciup.org/14134232

IDR: 14134232   |   UDC: 34:004:340.1:004.8:004.051   |   DOI: 10.21202/jdtl.2025.26