The EU AI Act and Its Contractual and Educational Implications: AI, Education, and the Law of Obligations
Authors: Ratko Ivković, Dragan Vučković, Milan Ječmenić, Dolores Ananieva, Miloš Stanković, Martina Petković, Ninoslava Janković
Journal: International Journal of Cognitive Research in Science, Engineering and Education (IJCRSEE)
Section: Review articles
Issue: Vol. 13, No. 2, 2025
The European Union’s Artificial Intelligence Act (EU AI Act), adopted in 2024, introduced the first comprehensive legal framework for regulating artificial intelligence across sectors, including education. This paper explores the dual implications of the Act, contractual and pedagogical, within the context of digital education, with a particular emphasis on IT curricula. It examines the risks of diminishing foundational knowledge in algorithmic thinking, programming, and system architecture as AI tools become more prevalent in academic settings. The paper analyzes legal obligations arising from the use of AI in education, such as transparency, informed consent, and liability, situating them within the law of obligations and contract law. It argues that AI integration in education, particularly in high-risk domains, requires more than technical compliance: it demands sustainable, ethical, and pedagogically informed implementation. Drawing on EU policies, UNESCO frameworks, and recent academic literature, the paper proposes strategic recommendations to ensure that digital education not only complies with legal standards but also upholds the core mission of higher education: critical thinking, ethical responsibility, and the cultivation of foundational knowledge.
Keywords: EU AI Act, digital education, IT curriculum, contractual obligations, AI literacy
DOI: 10.23947/2334-8496-2025-13-2-551-562
The rapid development of artificial intelligence (AI) in recent years has fundamentally transformed various sectors of society, including education. AI-based technologies are increasingly integrated into educational systems at all levels, promising personalized learning, automated evaluation, and real-time student support. While undeniably useful, these tools raise important questions about the sustainability of fundamental knowledge, especially in technical disciplines such as information technology (IT). In 2024, the European Union adopted the EU AI Act, the first comprehensive regulatory framework for the use of artificial intelligence across EU territory, with a special focus on the educational sector ( European Parliament “EU AI Act: First regulation on artificial intelligence”, 2024 ). Article 4 of this document explicitly mentions the importance of developing AI literacy among students, educators, and institutions, while also emphasizing the responsibility of educational providers to implement AI tools in a transparent and ethical manner ( Artificial Intelligence Act - Article 4, n.d. ). As institutions increasingly introduce tools such as ChatGPT, Copilot, GitHub Copilot, and others, one important question emerges: do students truly understand what happens “under the hood” of these tools? If students stop learning the basics of programming, algorithmic thinking, and software architecture, how will they critically use AI tools, develop new technologies, or solve complex problems? This concern is especially relevant in IT education, where a noticeable decline in student motivation to master fundamental skills has been observed ( Poth, 2023 ).

Today, many students use AI tools to assist with homework, coding tasks, and even writing term papers and theses. While such tools undeniably increase productivity and offer customized support, they also carry the real risk of diminishing foundational knowledge. According to research conducted by the University of Iowa, students increasingly perceive AI tools as substitutes for understanding rather than as supportive aids ( University of Iowa, 2023 ). This creates an educational paradox: students pass courses with AI assistance, yet graduate without a deep understanding of IT principles. The same concern is echoed in the latest report by the U.S. Department of Education, which warns that “AI must serve educational goals, not redefine them” ( U.S. Department of Education, 2023 ). Despite the growing popularity of AI tools, experts stress the importance of preserving traditional IT competencies. A piece published in Generative AI Newsroom emphasizes that foundational skills such as understanding algorithms, data structures, and object-oriented programming are not only crucial for comprehending AI systems but also for their ethical and responsible use ( Caplan, 2024 ). Moreover, according to an analysis by the Digital Education Council, the EU AI Act categorizes education as a “high-risk” domain, meaning every implementation of AI technologies in education must be carefully evaluated and justified ( Digital Education Council, 2024 ). This further confirms the need to maintain the core of IT as a discipline: understanding how systems work, not just how to use them.
Beyond the evident technological and pedagogical challenges, the implementation of AI systems in education raises a series of legal questions that can be approached through the lens of the law of obligations. The use of AI tools within educational institutions inherently involves the formation of contractual relationships, either explicit or implicit, between students, educators, institutions, and third-party service providers (e.g., AI platform vendors). Within this framework, issues such as informed consent, liability for harm resulting from inaccurate assessment or algorithmic bias, and the right to accurate information and fair treatment fall directly within the scope of contract law. Although the EU AI Act is primarily a regulatory document, it indirectly shapes the structure and content of these contractual relationships by imposing obligations related to transparency, human oversight, and risk assessment. These obligations introduce a new layer of legal accountability: if an institution fails to adequately inform students about the role of AI in grading, or if it neglects to evaluate potential discriminatory outcomes of algorithmic decision-making, it may be liable not only under regulatory frameworks but also under civil law for breach of contractual duty. Consequently, the EU AI Act should not be viewed solely as a technical or compliance-oriented policy but also as a legal framework that redefines institutional obligations and user rights within digital education. Understanding this intersection is particularly important in the IT education context, where the risks of over-automation and loss of fundamental knowledge carry both pedagogical and legal implications.
The introduction of AI into digital education is particularly sensitive within the context of IT curricula. Unlike other fields, IT education is inherently structured around the understanding, development, and critical evaluation of digital systems themselves. Incorporating AI tools into these programs without reinforcing the underlying theoretical knowledge can lead to superficial skill acquisition and reduced long-term competence. There is a tangible risk that students may graduate with experience in AI-assisted workflows, but without the cognitive and technical foundations required to innovate, debug, or ethically evaluate those very systems. This makes the integration of AI in IT education not only a pedagogical challenge but also a matter of strategic importance for the future of the digital profession. Despite the growing integration of AI tools into educational practices and the increasing appeal of personalized digital solutions, educational institutions must carefully consider the long-term implications of such implementation. The EU AI Act not only classifies education as a high-risk domain but also explicitly calls for the responsible integration of AI systems in a way that preserves the integrity and quality of the educational process ( European Parliament, 2024 ), ( Digital Education Council, 2024 ). Within this regulatory framework, it becomes essential to emphasize that core theoretical and programming skills - such as algorithmic thinking, understanding software architecture, and basic programming literacy - cannot be replaced by automated tools, but rather constitute the foundation for their critical and ethical use. The purpose of this paper is to highlight the necessity of a balanced integration of AI in education while safeguarding space for foundational knowledge, ensuring a sustainable, quality-driven, and theoretically grounded IT curriculum aligned with current European policy frameworks.
The EU AI Act and the Education Sector
With the adoption of the EU AI Act in 2024, the European Union established the first comprehensive legal framework for the regulation of artificial intelligence across various sectors, including education. This landmark regulation is not only pioneering in scope but also in its ambition to protect fundamental rights and ensure ethical AI deployment in contexts that involve vulnerable populations, such as students and educators. The education sector has been explicitly classified as a high-risk domain due to the inherently impactful nature of decisions made through AI technologies. Automated systems used in education, ranging from student assessment tools to algorithmic selection mechanisms, can significantly influence individuals’ learning trajectories, access to opportunities, and overall educational outcomes. Recognizing this, the EU AI Act mandates a transparent, safe, and ethically aligned implementation of AI systems in education. Institutions are required to exercise increased diligence when introducing AI into their pedagogical and administrative practices, particularly in areas that directly affect the rights and futures of learners. Beyond its regulatory-technological scope, the EU AI Act also invites analysis through the lens of contract law, particularly within the framework of the law of obligations. The deployment of AI systems in education gives rise to both explicit and implicit contractual relationships between educational institutions and their stakeholders: students, teachers, and third-party service providers. These relationships entail specific obligations concerning transparency, informed consent, data protection, and access to information regarding algorithmic processes. Should an institution fail to meet these standards, it may be held contractually liable for breaches such as failure to inform, data misuse, or harm resulting from algorithmic bias. In this light, the EU AI Act should not be regarded merely as a technological policy document, but rather as a multi-dimensional legal framework that redefines institutional responsibilities in a digital learning environment. It underscores the necessity of aligning digital innovation with legal accountability and ethical governance, thus safeguarding the integrity of education and the rights of its participants in the era of artificial intelligence.
Key Provisions Relevant to Education
The EU AI Act introduces a formal obligation of AI literacy for all stakeholders in the education sector, including students, teachers, administrative staff, and educational institutions. This literacy extends beyond the basic ability to operate AI tools; it encompasses an understanding of their functional capacities, technical limitations, potential risks, and broader implications for the educational process ( European Commission “AI Act | Shaping Europe’s digital future”, 2024 ). The underlying premise is that users of artificial intelligence should not remain passive recipients of automated recommendations and decisions, but should instead cultivate a critical awareness of how such systems influence their learning environments. This provision is particularly significant for the field of IT education, where students are expected not only to employ AI tools in everyday tasks, but also to comprehend their internal mechanics, ranging from machine learning models to algorithmic structures and the data architectures underpinning these systems. In this context, AI literacy also implies meta-literacy: the ability to understand how and why AI systems operate in specific ways, and how they may be applied ethically and responsibly in practice. In addition, the EU AI Act requires educational institutions that adopt AI technologies to ensure the transparent and ethical use of these systems. This includes the explicit obligation to provide students with clear and accurate information about when, where, and how AI is used in educational activities, be it in course personalization, automated knowledge assessment, candidate selection, or learning analytics ( European Parliament “EU AI Act: First regulation on artificial intelligence”, 2023 ). Transparency must be sufficient to ensure that users understand which data the system processes, how decisions are made, and what consequences these decisions may have on their educational trajectory. Moreover, institutions are mandated to conduct fundamental rights impact assessments before deploying AI systems that could have a significant effect on the educational process ( Mayer Brown, 2025 ). These assessments go beyond technical safety concerns and involve evaluations of potential risks to the right to education, equal access, nondiscrimination, and academic freedom. The implementation of AI in education must be accompanied by documented evaluations demonstrating that such systems do not infringe upon the fundamental rights of learners and educators. In the context of IT curricula, these provisions acquire even greater importance. The integration of AI tools into the classroom must be strategically planned and pedagogically grounded, in order to avoid the marginalization of foundational knowledge and core competencies in computing.
Skills such as programming, algorithmic thinking, data structure design, and software architecture comprehension remain essential to developing critical expertise and ensuring the long-term employability and adaptability of students. Consequently, educational institutions must not treat AI systems as replacements for traditional instruction, but rather as enhancements: tools that support learning without undermining the integrity of the educational process ( Digital Education Council, 2024 ).
Classification of AI Systems According to Risk
The EU AI Act establishes a four-tier risk classification system for artificial intelligence technologies, based on their potential impact on individual safety, fundamental rights, and societal values. The risk levels include: unacceptable risk, high risk, limited risk, and minimal risk. This classification plays a central role in determining the regulatory obligations applicable to different types of AI applications, including those deployed in educational settings ( Artificial Intelligence Act, n.d. ).
• Unacceptable Risk: This category includes AI systems that pose a clear and serious threat to individual safety or violate fundamental human rights. Such systems are strictly prohibited under EU law. Examples include AI used for mass surveillance, behavioral manipulation, social scoring, or targeting vulnerable groups without consent. In the context of education, this ban is essential to prevent the introduction of AI technologies that could result in coercive, discriminatory, or privacy-invasive learning environments.
• High Risk: This classification applies to AI systems that may significantly affect users’ health, safety, or fundamental rights. Within the educational context, this includes tools used for automated student selection, knowledge assessment (e.g., AI grading systems), monitoring during examinations, and personalized learning pathways that could shape educational trajectories. These systems are subject to strict regulatory requirements, such as comprehensive documentation, mandatory human oversight, robust data governance practices, and prior conformity assessments to ensure compliance with EU standards before deployment ( FeedbackFruits, 2024 ).
• Limited Risk: AI systems in this category do not pose substantial threats to individual rights but require a certain degree of transparency. For instance, AI-powered chatbots that provide administrative support or academic guidance fall into this group. Although relatively benign, it is still mandatory to inform users that they are interacting with an AI system.
• Minimal Risk: These systems carry negligible or no risk and are not subject to additional regulatory requirements. Common examples include AI-based spam filters, automatic translation tools, or content recommendation engines based on prior user interactions.
For AI systems classified as high-risk within education, the EU AI Act mandates a rigorous conformity assessment prior to implementation. This process entails identifying and analyzing potential threats to user rights, testing the system for accuracy and bias, and ensuring the presence of clear protocols for human oversight and real-time intervention when necessary. These measures are designed to guarantee that the integration of AI into education supports rather than undermines the foundational principles of transparency, accountability, and equity in learning.
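To make the notion of “testing the system for accuracy and bias” more tangible, the short Python sketch below compares the mean absolute deviation between AI-assigned and instructor-assigned grades across student groups. This is a minimal, hypothetical illustration only: the Act requires documented evaluation of high-risk systems but does not prescribe this (or any particular) metric, and the group labels, sample data, and function names here are invented for the example.

```python
from collections import defaultdict

def grading_bias_report(records):
    """Mean absolute error of an AI grading system, broken down by group.

    `records` is an iterable of (group, ai_score, human_score) tuples.
    A large gap between groups is one signal, among others, that the
    system needs further human review before deployment.
    """
    errors = defaultdict(list)
    for group, ai_score, human_score in records:
        errors[group].append(abs(ai_score - human_score))
    return {group: sum(errs) / len(errs) for group, errs in errors.items()}

if __name__ == "__main__":
    # Hypothetical audit sample: (student group, AI grade, instructor grade).
    sample = [
        ("group_a", 78, 80), ("group_a", 65, 66), ("group_a", 90, 88),
        ("group_b", 70, 81), ("group_b", 62, 74), ("group_b", 85, 95),
    ]
    print(grading_bias_report(sample))
    # roughly {'group_a': 1.67, 'group_b': 11.0}; the disparity itself would
    # trigger documentation, human review, and possibly system redesign.
```

In practice, such a check would be only one input into the documented conformity assessment, alongside accuracy testing and the human-oversight protocols described above.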
Contractual Relationships in the Educational Sector in the Context of Artificial Intelligence Implementation
The digital transformation of education, propelled by the increasing integration of artificial intelligence (AI) technologies, is reshaping the legal relationships among educational institutions, students, and third parties. The deployment of AI tools in educational processes not only alters the modalities of knowledge dissemination and assessment but also raises critical issues concerning legal responsibility, transparency, and the protection of user rights. The incorporation of AI into educational settings necessitates a reevaluation of existing legal frameworks to address the complexities introduced by these technologies (Spalević and Ilić, 2024). Educational institutions must navigate the challenges of ensuring that AI applications comply with legal standards, particularly regarding data privacy, algorithmic fairness, and accountability. The dynamic nature of AI systems, which can evolve and learn over time, further complicates the assignment of liability and the enforcement of contractual obligations. Moreover, the global nature of AI technologies, often developed and maintained by international entities, introduces cross-jurisdictional legal considerations. Educational institutions must be vigilant in understanding and adhering to both local and international legal requirements to mitigate potential legal risks associated with AI deployment.
Licensing Agreements and Obligations of Educational Institutions in the Context of AI
Educational institutions that implement AI systems frequently enter into licensing agreements with technology providers. These agreements delineate the terms of use for AI tools, specifying the rights and responsibilities of both parties, as well as measures for data protection and user privacy. According to guidelines from the SCL AI Group, such contracts should encompass clauses that ensure algorithmic transparency, provisions for human oversight, and clear procedures for addressing potential issues arising from AI application ( SCL AI Group, 2023 ). Furthermore, licensing agreements must account for the dynamic capabilities of AI systems, including their ability to learn and adapt over time ( Belović, Spalević and Jakšić, 2025 ). This necessitates the inclusion of provisions for regular updates, performance evaluations, and mechanisms for addressing unforeseen consequences of AI behavior. Institutions must also ensure that these agreements align with ethical standards and educational objectives, maintaining the integrity of the learning environment. The negotiation of licensing agreements requires a thorough understanding of both the technical aspects of AI systems and the legal implications of their use. Institutions should engage legal experts and technologists in the drafting process to create comprehensive agreements that safeguard the interests of all stakeholders.
User Consent and Informed Consent in AI Systems
The utilization of AI tools in education often necessitates obtaining user consent, particularly when it involves the collection and processing of personal data. Consent must be informed, meaning that users are clearly apprised of how AI systems operate, the nature of the data collected, and the purposes for which the data will be used. This practice aligns with principles of contract law, where informed consent is fundamental to the validity of agreements ( Linarelli, 2023 ). In educational contexts, obtaining informed consent is crucial for maintaining trust and ensuring compliance with data protection regulations. Institutions must provide transparent information about AI functionalities, data handling practices, and the potential implications of AI-driven decisions. This includes clarifying the extent to which AI influences assessments, learning pathways, and access to educational resources. Moreover, the process of obtaining consent should be ongoing, with opportunities for users to withdraw consent or opt out of specific AI applications. Institutions must establish protocols for managing consent preferences and ensuring that changes are respected and implemented promptly.
Internal Regulations and Digital Policies in the Context of AI
Educational institutions often establish internal regulations and digital policies governing the use of AI tools. While these documents may not constitute formal contracts, their acceptance by students and staff can confer contractual significance. For instance, policies may outline acceptable and unacceptable uses of AI tools, as well as the consequences of policy violations ( Bishop and Stothart, 2025 ). The development of internal policies should involve a collaborative process, incorporating input from educators, students, legal experts, and technologists. Policies must be clearly communicated, accessible, and regularly updated to reflect technological advancements and evolving legal standards. Additionally, institutions should implement training programs to ensure that all stakeholders understand the policies and their implications. This proactive approach fosters a culture of compliance and ethical AI use within the educational environment.
Liability for Errors and Harm in the Context of AI
The application of AI tools in education can lead to errors, such as inaccurate evaluations or discriminatory decisions. In such instances, the issue of legal liability arises. Analysis by DLA Piper suggests that liability may be shared between the AI technology provider and the educational institution, depending on who had control over the implementation and use of the system (DLA Piper, 2021). Determining liability involves assessing the roles and responsibilities of each party, the foreseeability of the error, and the adequacy of safeguards in place. Institutions must ensure that their contracts with AI providers include clear terms regarding liability, indemnification, and dispute resolution mechanisms. Institutions should establish internal procedures for monitoring AI performance, addressing grievances, and rectifying errors. This includes setting up oversight committees, conducting regular audits, and maintaining open channels of communication with stakeholders.
The Role of Contracts in Protecting User Rights in the Context of AI
Contracts play a pivotal role in protecting the rights of users of AI tools in education. Well-crafted agreements can provide mechanisms for appeals, access to information, and data protection. Recommendations from LexisNexis emphasize that contracts should include clauses that clearly define obligations related to privacy protection, data security, and dispute resolution processes ( Bishop and Stothart, 2025 ). In addition to contractual provisions, institutions must implement practical measures to uphold user rights. This includes ensuring transparency in AI decision-making processes, providing explanations for AI-driven outcomes, and facilitating user participation in decision-making where appropriate. By integrating legal safeguards with ethical practices, educational institutions can create an environment where AI technologies enhance learning while respecting and protecting the rights of all participants.
Pedagogical Implications of AI Integration in IT Education
The integration of generative AI tools such as ChatGPT, GitHub Copilot, and similar platforms into higher education, particularly within the field of Information Technology (IT), has raised significant pedagogical concerns. While these tools provide students with personalized assistance, access to additional resources, and faster problem-solving capabilities, they also give rise to a fundamental dilemma: are students using these tools to deepen their understanding, or are they substituting independent thinking with automated outputs? Prather et al. observe that many students rely on generative AI as a first-line solution, even when they do not fully grasp the underlying material ( Prather et al., 2023 ). In this context, AI is not employed as a learning facilitator but as a task executor, bypassing essential cognitive processes. This educational paradox results in students passing courses with the aid of AI tools without developing a deep understanding of core IT concepts, such as algorithmic complexity, memory management, or network architecture. The issue is further exacerbated by the fact that AI-generated solutions are often syntactically correct but semantically flawed. Lacking the requisite foundational knowledge, students are frequently unable to critically evaluate the quality of these outputs. Rather than cultivating analytical reasoning, students tend to perceive AI as an infallible source of knowledge, undermining the very objective of education: to develop autonomous problem-solvers capable of critically engaging with information. Educational institutions must therefore redefine the pedagogical use of AI: not as an automated assistant, but as a partner in reflective learning. One promising approach involves the introduction of assignments that require students to critically assess AI-generated content and articulate the rationale behind its use, thereby promoting metacognitive awareness and deeper conceptual engagement.
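A short, hypothetical Python example illustrates what “syntactically correct but semantically flawed” means in practice: the function below runs without error and looks plausible, yet an off-by-one loop bound silently drops the final window, a defect that a student without solid algorithmic grounding is unlikely to notice. The function name and data are invented for illustration and do not come from the cited studies.

```python
def moving_average(values, window):
    """Sliding-window average of `values` (illustrative, deliberately buggy).

    Syntactically valid and superficially plausible, but semantically flawed:
    the loop stops one iteration early, so the last window is silently dropped.
    """
    averages = []
    for i in range(len(values) - window):  # bug: should be len(values) - window + 1
        averages.append(sum(values[i:i + window]) / window)
    return averages

print(moving_average([2, 4, 6, 8, 10], 3))  # prints [4.0, 6.0]; the correct result also includes 8.0
```

Asking students to locate, explain, and repair such defects, rather than simply accept generated code, is precisely the kind of metacognitive exercise proposed above.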
The Erosion of Foundational Skills: Programming and Algorithmic Thinking
One of the most pressing pedagogical challenges in the age of pervasive AI is the preservation of fundamental skills that constitute the backbone of IT education, namely programming, algorithmic thinking, and systems comprehension. Increasingly, students in IT courses rely on AI to generate code, often bypassing the development of their own technical competencies. Zastudil et al. highlight that many students employ tools like Copilot to produce complete code solutions, often without understanding the logic that underpins them ( Zastudil et al., 2023 ). This trend fosters superficial learning that is not transferable to complex, real-world problems. As a result, students may become proficient tool users without acquiring the deep expertise necessary for software innovation, maintenance, or debugging. This deficiency becomes particularly evident during internships or employment, where employers expect independent problem-solving and critical reasoning skills. Furthermore, the decline in algorithmic thinking represents not only a technical shortfall but also an epistemological one. Learning algorithms and data structures develops abstract reasoning and logical problem-solving skills, which are applicable beyond IT. When these skills are neglected during the educational process, students risk losing both their professional competitiveness and their broader cognitive capacity to engage with a world increasingly structured by digital logic. Thus, educators face a dual challenge: to encourage the use of AI for accelerating learning while simultaneously safeguarding the integrity of core technical knowledge. One strategy involves combining AI-assisted tasks with traditional, manually coded assignments that require students to explain and reflect upon the AI-generated outputs.
Pedagogical Strategies for Preserving Fundamental Knowledge
In today’s educational environment, where AI support is nearly ubiquitous, new pedagogical strategies are needed, not to exclude AI, but to incorporate it as a tool for deep learning. The key lies in reshaping the roles of instructors, curricula, and assessment mechanisms. An innovative model that addresses this challenge is the “Socratic Playground,” developed by a research team led by Hu ( Hu et al., 2025 ). This AI-based environment adopts a Socratic method, where the system engages students in reflective dialogue, prompting them to follow a line of reasoning instead of immediately offering a solution. This strategy emphasizes the learning process rather than the end product. Another effective approach is the use of explanation-based pedagogy, where students are required to justify how they arrived at a solution, including how and why AI tools were employed. Instead of passively accepting the AI-generated content, students are encouraged to deconstruct, comment on, and adapt it. This active engagement fosters deeper understanding and promotes the acquisition of transferable skills.
A two-phase evaluation model can also be implemented. In the first phase, students complete assignments with the help of AI. In the second phase, without AI assistance, they must replicate, explain, or adapt their previous work. This model allows instructors to assess both conceptual understanding and independent thinking. Ultimately, the pedagogical integration of AI must prioritize self-reflective learning, in which students are not merely consumers of technological solutions but active participants in the construction of knowledge.
AI Literacy in IT Curricula
As regulatory frameworks such as the EU AI Act continue to evolve, the imperative to include AI literacy in IT curricula becomes more urgent. AI literacy in this context goes far beyond operational proficiency with AI tools; it entails a comprehensive understanding of how such systems function, their limitations, their data-processing mechanisms, and their broader ethical, legal, and societal implications. Mekić et al. emphasize that AI literacy must encompass not only technical training but also awareness of legal frameworks, ethical standards, and the social consequences of AI deployment ( Mekić et al., 2024 ). Students in IT programs should develop fluency in key areas such as machine learning fundamentals, large language models (LLMs), algorithmic bias, and responsible AI development. To build sustainable and future-ready curricula, educational institutions must integrate technical knowledge with a humanistic perspective. Interdisciplinary courses combining computer science, law, ethics, and education can significantly enhance students’ capacity to navigate the increasingly complex AI ecosystem with both competence and conscience ( Spalević et al., 2024 ). AI literacy should become a mandatory graduation requirement, akin to programming fluency or database management skills. In today’s IT education landscape, it is no longer sufficient to understand how to use AI tools; students must understand the systems behind them, and the implications of deploying those systems in real-world environments.
Recommendations for a Sustainable and Legally Aligned Development of Digital Education
The rapid digital transformation of education, accelerated by the integration of artificial intelligence (AI), demands that institutions navigate a delicate balance between innovation, legal compliance, and sustainability. While digital technologies offer unprecedented opportunities for personalized learning, increased access, and administrative efficiency, they also raise fundamental questions regarding data governance, legal accountability, and long-term educational quality. This chapter offers a comprehensive set of recommendations for the sustainable and legally aligned development of digital education. These recommendations draw on policy reports, international frameworks, and academic studies, with the aim of helping educational stakeholders ensure that digitalization supports, rather than undermines, the integrity, equity, and legal foundations of education.
Embedding Sustainability in Digital Education
One of the critical tasks in digital transformation is integrating the concept of sustainability, not only in ecological terms but also in pedagogical, technological, and institutional dimensions. García-Hernández et al. emphasize that sustainable digital education must rest on open educational resources (OER), digital inclusion, responsible use of AI tools, and pedagogies that support long-term skill development (García-Hernández et al., 2023). Sustainable practices also imply minimizing reliance on proprietary technologies that lock institutions into long-term dependencies. Instead, educational institutions are encouraged to adopt open-source tools and promote digital literacy that enables students to become not just users but critical evaluators and co-creators of technology. Furthermore, sustainability involves recognizing the long-term risks of over-automating educational processes. AI systems should not replace human educators in fundamental pedagogical functions such as mentoring, assessment, and curriculum design. Rather, they should augment teaching in ways that preserve space for reflection, creativity, and theoretical grounding.
Legal Frameworks and Compliance: UNESCO’s Common Digital Education Framework
As legal and ethical challenges grow in complexity, institutions require robust reference points for aligning their digital initiatives. In 2024, UNESCO launched a common framework to guide the digital transformation of education, identifying five interconnected domains: coordination and governance, infrastructure, digital skills and capacities, curriculum and content, and data governance ( UNESCO, 2024 ). This framework stresses that legal compliance must not be an afterthought, but rather a foundational layer of digital education policy. For example, institutions must ensure that all AI systems used in grading, admissions, or content delivery adhere to principles of fairness, transparency, and informed consent. This is particularly critical in light of the EU AI Act, which categorizes education as a “high-risk” domain, imposing obligations such as risk assessments, documentation of decision-making processes, and human oversight mechanisms (Artificial Intelligence Act, n.d.). Compliance extends to international data protection standards, such as the GDPR ( European Parliament and Council - Regulation (EU) 2016/679, 2016 ), which mandates lawful bases for data processing, user rights to access and correction, and limitations on automated profiling. Educational institutions must design their AI strategies to include privacy-by-design and ethics-by-default features from the earliest stages of system adoption.
The Digital Education Action Plan: Strategic Guidance from the EU
The European Commission’s Digital Education Action Plan (2021–2027) serves as a key strategic document that defines the EU’s vision for high-quality, inclusive, and accessible digital education. The Plan outlines two key priorities: (1) fostering a high-performing digital education ecosystem, and (2) enhancing digital skills and competencies for digital transformation ( European Commission “Digital Education Action Plan (2021–2027)”, 2020 ). For institutions, this implies a shift from sporadic technological adoption toward comprehensive and intentional strategies. One of the central recommendations is to institutionalize digital education, not through isolated experiments, but by integrating digital innovation into core policies, budgets, teacher training programs, and evaluation frameworks. The Action Plan also highlights the importance of ensuring equitable access to digital infrastructure and skills. Without addressing the digital divide, digital education risks reinforcing social inequalities rather than alleviating them. Therefore, sustainability is also a matter of social justice: systems must be designed to serve all learners, especially those from underrepresented or marginalized backgrounds.
Frameworks and Methodologies: OECD’s Analytical Approach
To support evidence-based decision-making, the OECD has proposed a sophisticated analytical framework to help countries evaluate their digital education strategies ( OECD “Policies for the Digital Transformation of School Education: Proposed analytical framework and methodology”, 2023 ). The framework includes dimensions such as institutional governance, financial sustainability, technological interoperability, and impact measurement. This structured methodology allows policymakers and educators to move beyond anecdotal evaluations and instead design interventions grounded in measurable goals. For example, the use of AI tools in classrooms can be assessed not only in terms of efficiency but also with respect to their effects on learning outcomes, privacy risks, and student agency. A notable recommendation from the OECD is the creation of inter-sectoral task forces, where educators, legal experts, technologists, and students collaborate to evaluate and guide the digital transformation process. Such collaboration ensures that decisions are not siloed in IT departments, but reflect educational, ethical, and legal values.
Sustainable Data Governance in Education
As the use of data-intensive AI systems increases, sustainable data governance becomes a foundational requirement. Lee outlines seven principles for developing sustainable data practices in education: (1) data stewardship, (2) data quality, (3) ethical use, (4) transparency, (5) environmental impact, (6) supportive policy, and (7) continuous improvement ( Lee, 2025 ). Implementing these principles requires more than technical fixes. Institutions must build cultures of accountability where data collection is justified, minimized, and explained to all stakeholders. For example, when AI systems are used to monitor student behavior or performance, transparency about what is being tracked, and why, is essential. Educational institutions should also conduct environmental audits of their digital infrastructure. AI systems, particularly those based on large models, consume significant computational resources. Incorporating “green AI” principles, such as efficiency optimization and server sustainability, is necessary for aligning with broader ecological goals. Institutions must publish data ethics charters that define internal standards for data use, sharing, and storage. These charters help ensure alignment with both legal norms (e.g., GDPR) and institutional values such as academic freedom, fairness, and student autonomy.
Integrating Sustainability into Computing and IT Education
Sustainability should not only guide how we use digital systems, but also how we teach about them. Moreira et al. propose a roadmap for integrating sustainability into computing education, which includes key domains such as systems thinking, environmental and social impact, ethical reflection, and legal literacy ( Moreira et al., 2024 ). Their approach is based on the premise that future IT professionals must not only understand algorithms and systems design but also be capable of assessing how their creations affect society and the planet. This can be achieved through course modules on sustainable design patterns, low-impact coding, algorithmic accountability, and legal constraints in software development. Moreover, the roadmap suggests that sustainability be treated as a transversal competency across all courses, not isolated in elective modules. This ensures that every computing student, regardless of specialization, is equipped to navigate the increasingly complex terrain of responsible digital innovation. Such curricular transformations require collaboration between academic departments, policy bodies, and professional associations. Accreditation agencies should also update their criteria to include sustainability and legal compliance as core competencies for computing degrees.
Policy Recommendations and Institutional Responsibilities
In light of the discussions above, this section outlines concrete policy and institutional recommendations for achieving a sustainable and legally aligned digital education ecosystem:
1. Adopt and adapt UNESCO’s digital transformation framework to local legal and pedagogical contexts, ensuring cross-sectoral coordination and ethical governance.
2. Integrate AI ethics and legal compliance training into teacher education programs, IT staff training, and student orientation curricula.
3. Develop internal regulatory instruments, such as AI usage policies, algorithmic transparency reports, and data ethics charters.
4. Establish ethics and compliance committees tasked with conducting impact assessments and reviewing all new AI integrations.
5. Mandate open data practices and open educational resources (OER) to reduce dependency on proprietary platforms and ensure long-term knowledge accessibility.
6. Introduce interdisciplinary courses on sustainability in computing, co-taught by faculty from IT, law, education, and environmental sciences.
7. Perform regular audits of AI system performance, bias, environmental impact, and legal compliance, publishing transparent reports for stakeholders.
8. Incentivize open-source development and responsible innovation through grants, awards, and institutional partnerships.
Conclusions
The integration of artificial intelligence into education represents a paradigmatic shift in both pedagogical practice and legal accountability. As this paper has demonstrated, the European Union’s AI Act offers more than a regulatory checklist: it introduces a framework that fundamentally redefines institutional responsibilities, user rights, and the structure of educational relationships in the digital era.
One of the core findings of this paper is that the widespread use of AI tools in IT education poses both pedagogical and legal challenges. From a pedagogical standpoint, students are increasingly relying on AI to solve problems, write code, and complete assignments, often without understanding the underlying principles. This trend threatens the development of critical foundational skills in programming, algorithmic thinking, and system design. Without these competencies, students risk becoming mere operators of AI rather than creators or critics of it. Moreover, this development risks undermining the very goals of IT education: intellectual autonomy, technical mastery, and the capacity to innovate responsibly.
From a legal perspective, the EU AI Act classifies education as a high-risk domain. This classification carries significant implications: institutions must now ensure transparency, conduct risk assessments, and implement mechanisms for human oversight. The deployment of AI in educational settings is no longer a neutral technological choice but a legally binding action that generates contractual obligations. Whether through licensing agreements, consent forms, or internal digital policies, institutions must respect principles of informed consent, fairness, and accountability. Failure to do so could result in civil liability, not only regulatory non-compliance.
Furthermore, the paper has shown that sustainability must be a guiding principle in digital education. Sustainability here refers not only to environmental responsibility, but to pedagogical and institutional resilience. As emphasized in the final chapter, integrating sustainability into computing education involves embedding ethical reflection, legal awareness, and systems thinking into curricula. This is essential to developing future IT professionals who are not only technically capable but also socially and legally literate. To ensure a sustainable and legally compliant future for digital education, several actionable recommendations have been outlined. These include adopting UNESCO’s framework for digital transformation, implementing EU and GDPR-aligned policies, developing interdisciplinary courses on AI ethics and law, and conducting regular audits of AI systems. Institutions are also encouraged to reduce dependency on proprietary platforms by investing in open-source and transparent AI tools.
The EU AI Act presents both a challenge and an opportunity. It challenges institutions to rethink their use of AI, not merely in terms of efficiency, but in terms of ethics, legality, and long-term educational goals. At the same time, it offers an opportunity to realign digital education with the foundational values of higher education: knowledge, critical inquiry, equity, and integrity. Only by integrating AI in a balanced, thoughtful, and law-conscious manner can education remain both innovative and principled in the era of intelligent machines.
Conflict of interests
The authors declare no conflict of interest.
Author Contributions
Conceptualization, R.I., M.P. and M.S.; methodology, R.I., D.V., N.J. and M.J.; formal analysis, R.I. and D.A.; writing—original draft preparation, R.I., D.A. and N.J.; writing—review and editing, R.I., M.J., D.A. and M.S. All authors have read and agreed to the published version of the manuscript.