Personalized Learning through Artificial Intelligence: Opportunities, Risks, and Policy Perspectives
Authors: Lazar Stošić, Aleksandar Radonjić, Olja Krčadinac, Borivoje Baltezarević, Olga Mikhailova
Journal: International Journal of Cognitive Research in Science, Engineering and Education (IJCRSEE)
Section: Review articles
Issue: Vol. 13, No. 2, 2025.
Artificial Intelligence (AI) is redefining the landscape of personalized education by enabling adaptive systems that respond dynamically to individual learning needs. This paper explores how AI technologies, including machine learning, big data analytics, and intelligent tutoring systems, support the transformation of pedagogical models. Key opportunities discussed include real-time personalization of content delivery, increased student motivation, and inclusive learning environments. At the same time, the study critically examines potential risks, such as data privacy concerns, algorithmic bias, and the erosion of human-centered pedagogy. Policy implications are addressed with recommendations for regulatory frameworks to ensure ethical and responsible AI integration into education. The paper emphasizes the need for empirical research to validate AI-driven models in diverse educational settings. By aligning technological innovation with humanistic values, the paper contributes to the ongoing discourse on how AI can support, not supplant, the role of educators. The findings provide a foundation for future research and policy design aimed at creating equitable, transparent, and effective personalized learning ecosystems.
Keywords: Artificial intelligence, personalized learning, education policy, algorithmic ethics, adaptive learning systems
Short URL: https://sciup.org/170210295
IDR: 170210295 | DOI: 10.23947/2334-8496-2025-13-2-541-549
© 2025 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
If the criterion is raised, weaker students fall even further behind in knowledge; if the criterion is lowered, children with advanced knowledge stall in their progress. It is precisely in this gap that artificial intelligence can make a contribution. Based on the needs of individuals, artificial intelligence can determine and propose a learning plan and program for each person, thereby turning learning itself into personalized learning. Personalized learning adapts the content, methods and pace of learning to the individual needs, interests and styles of each student. It is focused on the needs, abilities and interests of individual students. What separates personalized learning from other approaches is its focus on the student as an active creator of their own educational experience. Students who receive content in a format that suits their learning styles report higher levels of motivation, engagement and success (Graf et al., 2009). Personalized approaches lead to greater satisfaction and more efficient learning. When talking about personalized learning, we cannot leave out artificial intelligence (AI) and machine learning, which are increasingly being integrated into the education system (Stošić and Janković, 2024). A personalized approach uses analytics tools and technologies, including AI, to continuously collect and analyze student data and to adapt instructional materials and strategies in real time (Talapbayeva et al., 2024). This approach contrasts with traditional "one-size-fits-all" methods by accommodating different learning paces, preferences, and goals, allowing students to progress based on mastery rather than age or grade level (Knox, 2020). Personalized learning is based on the premise that every student can succeed if provided with the right resources and support (Duckett, 2010).
Personalized learning has its roots in the early 1960s, when the first generation of educational software (e.g., the PLATO system developed at the University of Illinois) enabled basic forms of adaptive learning. In the 1990s, with the advent of the Internet, learning management systems (LMS) appeared that allowed greater flexibility in the organization of the teaching process.
Personalized learning experienced a genuine rise in the 21st century with the development of artificial intelligence (AI), learning analytics and big data technologies. Digitalization supported by artificial intelligence has transformed education itself, making it far more accessible, efficient and attractive to every individual. At the same time, mass use raises questions about the shortage of professional staff, ethics, algorithmic bias and even privacy itself.
An overview of the modern application of AI in personalized learning
When we talk about the application of AI tools in personalized learning, we primarily mean adaptive tutors, intelligent tutors and recommender systems. Based on insights from AI systems, teachers can create differentiated lesson plans. These tools represent the foundation of modern AI applications in education and support personalization, self-directed learning and improved student evaluation (Duong et al., 2025). AI systems provide immediate feedback on accuracy and understanding, thereby speeding up the learning and correction process. AI can adjust the order of teaching units: a student with an auditory style, for example, will be offered more video lessons and spoken explanations, while visual learners will be offered infographics, maps and animations. The authors emphasize that AI in education can enhance educational experiences by making learning more personalized, flexible, and engaging, without replacing teachers but rather augmenting their capabilities (Duong et al., 2025). AI tools offer analytics on student progress and difficulties, helping teachers plan interventions more precisely (Holmes et al., 2019). AI tools allow students to learn at their own pace. This adaptability helps students overcome frustration caused by content that is too difficult or too easy, thereby increasing their sense of competence, an important factor in intrinsic motivation (Deci and Ryan, 2000). AI tools also enable students with specific disabilities to master the material more easily through assistive technologies and adapted educational paths, which contributes to more equal education and greater motivation.
Adaptive tutors use artificial intelligence to adapt content to an individual's learning in real time. In addition to a personalized experience, adaptive tutors can also provide students with an interactive learning experience. Adaptive tutors are web-based interactive teaching systems that evolved from intelligent teaching systems (Kamardeen, 2014; Khawaja, Chen and Marcus, 2014). Their goal is to personalize the educational process to suit each individual's pace and learning style, rather than relying on traditional methods of learning. Student evaluations indicate that students prefer the computer-based approach over the traditional one (Nirmalakhandan, 2007). Based on students' performance on two types of paper-based tests he conducted, the same author argues that computerized adaptive tutorials can be a valid tool for assessing problem-solving skills. For educators, adaptive tutorials offer the ability to easily monitor and analyze student interactions with the tutorial. Examples of adaptive tutorials include the Knewton and DreamBox Learning platforms. What all these platforms have in common is that they provide an environment where students can work at their own pace, at a time and place that suits them. Incorporating adaptive tutorials that combine guided practice, interactive opportunities and immediate, adaptive feedback provides students with learning opportunities that can improve their confidence, understanding and learning outcomes (Weltman, 2025).
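The adaptive logic described above can be sketched with a minimal mastery-based loop that nudges an estimate of the learner's skill after each answer and selects the next item's difficulty accordingly. All names, parameter values and thresholds below are invented for illustration; they are not taken from Knewton, DreamBox or any real platform.

```python
# Minimal sketch of an adaptive-tutor loop (illustrative, not a real system).
from dataclasses import dataclass, field

@dataclass
class LearnerState:
    mastery: float = 0.5          # estimated probability the skill is mastered
    history: list = field(default_factory=list)

def update_mastery(state: LearnerState, correct: bool, rate: float = 0.15) -> None:
    """Nudge the mastery estimate toward 1 on success, toward 0 on failure."""
    target = 1.0 if correct else 0.0
    state.mastery += rate * (target - state.mastery)
    state.history.append(correct)

def select_difficulty(state: LearnerState) -> str:
    """Keep items near the learner's current level (hypothetical thresholds)."""
    if state.mastery < 0.4:
        return "easy"
    if state.mastery < 0.75:
        return "medium"
    return "hard"

learner = LearnerState()
for answer in [True, True, False, True, True]:
    update_mastery(learner, answer)
print(round(learner.mastery, 3), select_difficulty(learner))  # → 0.67 medium
```

The point of the sketch is the feedback cycle itself: every response updates the learner model, and the model immediately drives the next content decision, which is what distinguishes adaptive tutors from fixed-sequence courseware.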
Intelligent tutors (ITS, Intelligent Tutoring Systems) simulate the work of a human teacher. They not only track student responses, but also model their knowledge, detect faulty reasoning, and offer feedback aimed at correcting specific cognitive errors. Early discussions centered on teaching with computer assistance; today, that same computer-assisted instruction has evolved into Intelligent Tutoring Systems. Intelligent tutors provide real-time feedback, indicating not only the correctness of answers but also the causes of errors. Such support develops metacognitive skills and encourages students to reflect on their own knowledge, which contributes to academic success (VanLehn, 2011). ITS often use complex learner models and artificial intelligence techniques such as expert systems or machine learning (Suresh, Kolluru and Ubaidullah, 2025). Examples are AutoTutor and Cognitive Tutor.
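The learner modeling that ITS perform is often implemented with Bayesian Knowledge Tracing (BKT), a classic technique associated with systems in the Cognitive Tutor tradition. The sketch below shows one simplified BKT update step; the parameter values (slip, guess and learn rates) are illustrative assumptions, not values from any deployed system.

```python
# Simplified Bayesian Knowledge Tracing update (illustrative parameters).
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """One BKT step: Bayes-update the probability the student knows the
    skill given their response, then apply a learning transition."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # The student may learn the skill between practice opportunities.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability the student already knows the skill
for response in [False, True, True]:
    p = bkt_update(p, response)
print(round(p, 3))  # → 0.96
```

Because the model distinguishes slips from genuine non-mastery and guesses from genuine mastery, it supports exactly the kind of error diagnosis described above, rather than scoring answers as merely right or wrong.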
With the increasing development of ICT and its use in education, the problem of large amounts of information has arisen, reflected in the time users spend searching for and extracting the information they need. Recommender systems emerged precisely to solve these problems. In education, recommender systems provide recommendations for teaching resources, courses or tasks. They have become a very commonly used tool in various fields such as e-commerce, social networks, digital media and books (Nadine, Cao and Deng, 2016), as well as in education, in teaching and academic advisory services (Deschênes, 2020). Recommender systems are software tools that assist users in the decision-making process by applying information filtering, data mining, and predictive algorithms (Venkatraman, 2017). Based on previous activities, interests, or knowledge level, the system analyzes the data and recommends the next steps in learning. A recommender system can be very useful for students when choosing elective courses (Dwivedi and Roshni, 2017). Burke (2007) differentiates six classes of recommendation approaches: collaborative filtering (CF), content-based filtering (CBF), knowledge-based filtering, context-based filtering, demographic filtering, and hybrid filtering. Typical representatives of recommender systems in education are the e-learning platforms Coursera, Edmodo and Duolingo. Coursera, for example, uses AI to personalize the educational experience: it offers adaptive course recommendations based on the user's previous interests, knowledge level and goals, and uses AI to automatically evaluate assignments, generate feedback, and transcribe lectures. The platform collaborates with universities and companies, enabling massive open online courses (MOOCs).
Coursera certainly helps teachers assess and analyze student performance more quickly. A similar platform is Duolingo, which uses intelligent tutors and adaptive teaching algorithms. Platforms like Duolingo use game elements (points, levels, challenges) to encourage motivation. Studies show that such approaches significantly increase student engagement and frequency of use of educational resources (Vesselinov and Grego, 2012). The system uses gamification, natural language processing (NLP) and machine learning to adjust the difficulty of tasks, recognize student weaknesses and offer exercises that strengthen specific skills. Using gamified and interactive systems like Duolingo increases engagement and regular learning (Loewen et al., 2020). The Chinese education platform Squirrel AI, meanwhile, uses advanced intelligent teaching systems that break skills and concepts down into the smallest possible units in order to accurately detect knowledge gaps.
By type of education, recommender systems mainly cover formal, especially student-oriented, education (Urdaneta-Ponte et al., 2021). AI tools such as adaptive and intelligent tutors adapt learning content to the individual needs, pace and style of the learner (Luckin et al., 2016). All these platforms enable learning without time and space limitations, which is especially important for students with specific needs or from remote areas.
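To make the collaborative-filtering class from Burke's taxonomy concrete, the following minimal sketch recommends a course by weighting other users' ratings by cosine similarity. The users, courses and ratings are entirely invented; production platforms such as Coursera use far richer signals and models.

```python
# Toy user-based collaborative filtering (invented data, for illustration).
from math import sqrt

ratings = {
    "ana":   {"algebra": 5, "statistics": 4, "python": 1},
    "marko": {"algebra": 4, "statistics": 5, "logic": 4},
    "ivana": {"python": 5, "logic": 2, "statistics": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[c] * v[c] for c in common)
    return dot / (sqrt(sum(x * x for x in u.values())) *
                  sqrt(sum(x * x for x in v.values())))

def recommend(user):
    """Score courses the user has not taken by similarity-weighted ratings."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for course, r in their.items():
            if course not in ratings[user]:
                scores[course] = scores.get(course, 0.0) + sim * r
    return max(scores, key=scores.get)

print(recommend("ana"))  # → logic
```

Content-based filtering would instead compare course features to the learner's profile, and hybrid approaches combine the two; the collaborative variant shown here needs no content metadata at all, only the behavior of similar users.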
Learning support for students with special needs: human-centered AI for inclusive education
AI tools significantly contribute to inclusive education, especially when it comes to students with special educational needs. By using artificial intelligence-based technology, it is possible to provide personalized support, assistive functions and an environment that allows each student to progress according to their abilities. AI systems can automatically adjust task complexity, presentation speed, and content modality (text, audio, video) according to the abilities of students with learning disabilities, autism, or ADHD (attention-deficit/hyperactivity disorder). Adaptive tutors enable learning without pressure and at a safe pace (Al-Azawei et al., 2017). Tools for converting speech into text (speech-to-text) and vice versa, image and voice recognition, as well as automatic transcription allow students with visual, hearing or motor disabilities to participate equally in classes. Systems such as Microsoft Immersive Reader or Seeing AI use NLP and visual recognition for this purpose. AI can recognize signs of frustration, inattention or cognitive overload by analyzing facial expressions, body language or behavior (affective computing). This enables timely intervention and additional help for students who have difficulties with emotional regulation. AI enables detailed assessment of student progress, identifies areas of weakness and recommends specific interventions. This is especially useful for students with intellectual disabilities, who require clearly structured and goal-oriented teaching.
Risks and challenges of applying AI in personalized learning
Ethics and protection of student data privacy are key challenges in the application of artificial intelligence (AI) in education. While AI tools offer numerous advantages, their use must be aligned with data protection laws and ethical principles that protect learners, especially children and vulnerable groups. AI systems collect large amounts of students' personal data: names, behavioral patterns, scores, and even emotional reactions. Unethical use of this data, without consent or a clear purpose, can lead to violations of student privacy and security (Williamson and Eynon, 2020). Legislation that emphasizes accountability, transparency in AI, bias mitigation, and comprehensive application of ethical principles is an effective strategy for dealing with "dark AI" (Bjelajac, Filipović and Stošić, 2023, p. 530). Legal frameworks such as the GDPR (EU) and FERPA (US) require explicit informed consent from parents and students for data collection and processing. Ethics dictates that educational institutions clearly communicate why data is collected, how it is used and who has access (Regan and Jesse, 2019). AI models can be biased if they are trained on unrepresentative data, which can lead to unfair educational recommendations, grading or labeling of students. Ethics requires regular testing and transparency of algorithms to ensure fairness. Ethics also mandates the limitation of covert forms of monitoring and the use of only those technologies that directly contribute to the educational goal.
Algorithmic biases pose serious ethical challenges in the application of artificial intelligence (AI) in education. These problems occur when algorithms make decisions that are unfair, discriminatory, or based on non-objective patterns in the data. Bias occurs when the training data is incomplete, historically biased, or unrepresentative. For example, if an AI system learns from data that favors certain demographic groups (e.g., students from urban areas), the resulting recommendations may be unfair to students from other areas (Noble, 2018). AI systems can classify students according to past performance, behavior, or demographic characteristics, creating a risk of labeling (e.g., "bad students," "low-risk students") that can affect the treatment they receive and teacher expectations. This type of prediction can become self-confirming, entrenching negative educational outcomes (O'Neil, 2017). Algorithms that recommend educational content or ways of learning may ignore cultural, linguistic or cognitive differences. If students of different genders, languages or ethnic groups are not offered diverse content, educational inequalities may be reproduced. To mitigate bias, it is necessary to use diverse and representative data sets, conduct regular audits of algorithms, involve ethics committees in the development of AI systems, and increase transparency in the way recommendations or decisions are made (Binns, 2018).
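One of the audit practices mentioned above can be sketched very simply: comparing the rate at which a model assigns a favorable outcome (here, a hypothetical "advanced track" recommendation) across groups, i.e., the demographic-parity gap. The groups, decisions and review threshold below are synthetic illustrations, not results from any real system.

```python
# Toy fairness audit: demographic-parity gap on synthetic decisions.
def selection_rate(decisions):
    """Fraction of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# 1 = recommended for the advanced track, 0 = not recommended
audit_data = {
    "urban": [1, 1, 1, 0, 1, 1, 0, 1],   # 6/8 recommended
    "rural": [1, 0, 0, 0, 1, 0, 0, 0],   # 2/8 recommended
}

gap, rates = demographic_parity_gap(audit_data)
print(rates, f"gap = {gap:.2f}")
if gap > 0.2:  # hypothetical review threshold
    print("flag model for human review")
```

A large gap does not by itself prove discrimination (base rates may differ), but it is exactly the kind of regularly computed, transparent indicator that algorithm audits rely on to decide when a system needs closer examination.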
The danger of automation and depersonalization of education is becoming more pronounced with the increase in the application of artificial intelligence (AI) in educational systems. Although AI offers numerous advantages, an over-reliance on automated processes can compromise the core pedagogical values and humane aspects of teaching. Personal contact between teacher and student plays a key role in education, not only in the transfer of knowledge, but also in the development of empathy, support and motivation. Automated tutors and evaluators cannot adequately replace the emotional intelligence and ethical judgment of teachers (Selwyn, 2019). AI systems often operate according to predefined patterns and metrics, which can lead to a standardized approach to learning. This approach ignores contextual, cultural and individual factors, reducing the quality and relevance of education (Knox, 2020).
Policy and Regulatory Perspectives on AI in Education
Aware of the shortcomings and threats we face, it is necessary to set clear ethical guidelines. In general, national and institutional codes of ethics for the use of AI in education should be clearly formulated. "The incorporation of artificial intelligence in education has enormous potential to transform the educational process, providing benefits such as increased productivity and tailored teaching" (Stošić and Janković, 2023, p. 68). It is not enough simply to use AI tools; it must also be ensured that they respect the rights of students, with special emphasis on privacy protection, consent and transparency. The use of AI for uncontrolled mass surveillance or profiling of students should be banned.
UNESCO's first global standard on the ethics of artificial intelligence, the "Recommendation on the Ethics of Artificial Intelligence", was adopted in 2021 and applies to all 194 UNESCO member states. The recommendation states that tools should be developed to help member states build a comprehensive picture of how prepared they are to deploy AI ethically and responsibly for their citizens. The Global Observatory on Ethics and Governance of Artificial Intelligence aims to provide a global resource for policymakers, regulators, academics, the private sector and civil society seeking solutions to the most pressing challenges posed by AI.
Regardless of social and geographic origin, fair access and digital inclusion should be ensured, with educational policies that include strategies to reduce the digital gap. All students, regardless of gender, should have access to devices and the Internet. More attention should be given to developing an AI strategy aimed at rural and marginalized communities.
Teachers should be trained not merely to operate AI tools but to use them critically, and they must be involved in decision-making and in designing the training itself. We should strive to introduce professional development programs that include digital pedagogy and AI ethics. Educators need to understand how AI can affect the education system and where the technology is heading as it advances. Teachers must know and understand where AI can replace humans and where it can only complement and assist them. We must constantly point out the dangers that really exist, and work permanently on this aspect of the mental health of individuals, nations, and global society (Bjelajac, Filipović and Stošić, 2022).
Standardizing the use of AI in schools and universities
Standardizing the use of artificial intelligence (AI) in schools and universities is a key step towards ensuring the quality, fair and ethical application of technology in educational systems. The goal of standardization is to establish common guidelines, technical frameworks and pedagogical principles for the integration of AI into the educational process. Ministries of education and educational institutions should define official guidelines for the use of AI tools, including criteria for their accreditation, security and compliance with pedagogical goals. It is necessary to standardize procedures for the selection, evaluation and updating of AI systems, in cooperation with regulatory bodies and experts in education and ethics. The Ethical Guidelines for Artificial Intelligence in Education of the Ministry of Education of China outline principles for ensuring the responsible use of artificial intelligence technologies in educational institutions (Wei and Niemi, 2023). China developed the National Guidelines for the Ethical Use of AI in Education (2021), while the EU is laying the groundwork for responsible use through the Ethical Guidelines for Trustworthy AI. Primary school children in China are prohibited from presenting content produced by AI generators as their own work, and according to these guidelines they are not allowed to use open-source AI-based generators independently; secondary school students, in contrast, may use and explore AI-generated content. Educators must ensure that AI complements, but does not replace, human-led teaching. Parents are advised to monitor how their children use artificial intelligence and to protect their personal data, and are urged to teach their children not to rely on technology alone but to prioritize emotional engagement. China's Ministry of Education has issued ethical guidelines that emphasize principles such as fairness, transparency and accountability.
All these guidelines and regulations mandate strict compliance with data privacy law, thus ensuring the protection of students' data. Educational institutions themselves must put security measures in place to protect personal data. Schools prepare children for future labor markets by building digital literacy through the integration of artificial intelligence technologies into the curriculum. Implementing AI technology alone is not enough; schools are obliged to monitor the application of artificial intelligence and its impact on learning outcomes. All educators must receive training in the effective use of artificial intelligence. The government works with technology companies to develop AI solutions that meet educational needs while adhering to ethical standards. China's approach to regulating AI in education focuses on balancing innovation with ethical considerations, aiming to improve educational outcomes while protecting the rights and well-being of students. Important themes include inclusion, personalization, justice, safety, transparency and responsibility, which serve as the foundation for developing AI in education.
AI tools must certainly be aligned with national curricula and exam standards. Standardization must include pedagogical performance indicators, to give insight into whether the AI tool contributes to the development of students’ knowledge, skills, engagement and critical thinking. It is necessary to introduce quality indicators such as precision of recommendations, adaptation time and level of personalization. A standardized framework must include mandatory protection of student privacy, as well as guidelines for algorithm transparency and the possibility of human intervention.
Future Directions and Research Recommendations
In modern education, the need to train teachers and educators to work with artificial intelligence tools is becoming increasingly evident. The introduction of artificial intelligence in schools and universities requires not only technical integration, but also the development of digital and pedagogical competences among teaching staff. Educators must understand how AI works, what its capabilities and limitations are, and how it affects learning, assessment, and interaction processes. This includes a basic understanding of algorithms, personalization, learning analytics and ethical implications (Zawacki-Richter et al., 2019). Training should not focus exclusively on technical skills, but also on the pedagogical integration of AI: how to use adaptive tutors, recommender systems or evaluation tools for differentiated and inclusive teaching. Educators must be trained to understand the legal and ethical aspects of using AI tools, especially in relation to the protection of student privacy, consent and transparency of algorithms. Training should also cover the recognition of biases and the problems of automated decision-making. Given the rapid evolution of AI technologies, teacher training must be continuous, with self-study resources, seminars and accredited courses available. Institutional support through strategies for the digital transformation of the educational system is also needed.
In general, the application of AI in education has pedagogical value. On the one hand, there is personalization, adaptive learning and real-time learning support; on the other, depersonalization and reduced student-teacher interaction. Modern technologies bring automated assessment, time savings and scalability, but also drawbacks: technical errors, dependence on platforms and low interoperability. AI tools in education certainly bring progress, but care must be taken regarding discrimination, stereotyping, unauthorized data collection, surveillance and algorithmic bias. As for algorithmic bias, its impact in the context of differences between genders, ethnic groups and students with developmental disabilities has not been sufficiently investigated (Holmes et al., 2022). AI tools provide support in evaluation but reduce the pedagogical autonomy of teachers.
Conclusion
AI in education represents a significant step towards personalization and more effective learning. Through timely support, motivational strategies and analytical insights, these tools help increase both academic success and intrinsic student motivation. Automatic detection of learning styles through AI analytics is a key step towards creating an intelligent, adaptive education system that treats students not uniformly but in accordance with their cognitive profiles. Students with special needs are given tools that allow them to achieve their full potential much more easily.
Ethics and privacy protection should be integrated into every phase of the development and implementation of AI tools in education. Policies need to be in place that ensure safety, fairness and student autonomy in the digital environment, and they must strike a balance between innovation and accountability. As much as AI can improve teaching itself, it is more important to preserve the human element of teaching. The teacher should remain the one who leads and controls the teaching process, not someone replaced by technology. Technology and pedagogy must support each other. The standardization of AI in education must be multidimensional: technical, pedagogical and ethical. Without teacher training, AI cannot be properly used and exploited. Teachers must be motivated and properly guided in using AI and its tools in teaching in order to reap the benefits. Without proper training, the use of artificial intelligence will be reduced to its most basic form of searching for and retrieving information.
Future research should move beyond the development of technical functionalities and focus on several key directions:
• Longitudinal studies on the actual impact of AI-supported personalization on student performance, engagement, and motivation in diverse educational contexts.
• Critical analysis of algorithmic decision-making to examine fairness and bias in adaptive learning environments, especially among students of different socio-economic backgrounds, ethnicities, or learning abilities.
• Evaluation of teacher-AI interaction models, including how AI tools influence teacher autonomy, pedagogical strategies, and classroom dynamics.
• Exploration of students' perceptions and emotional responses to AI-mediated learning, particularly in terms of trust, agency, and well-being.
• Policy-oriented research aimed at developing governance frameworks that regulate the ethical design, deployment, and accountability of AI in education, with special attention to low-resource settings.
Implications for Future Research
Regardless of the positive effects and applications of artificial intelligence in education, far-reaching research is certainly needed to gain a true picture of its application in education. Such research should not be limited to the technical capabilities of artificial intelligence, but must also encompass its ethical, pedagogical and socio-cultural dimensions. A critical and reflective approach will provide a better understanding of how artificial intelligence technology affects not only educational outcomes, but also human relationships, student agency and institutional practice.
Key directions for future research include:
• Longitudinal impact studies to assess the sustained effects of AI-based personalization on learning outcomes, engagement, and motivation across diverse educational levels and demographic groups.
• Critical analysis of algorithmic bias and transparency, particularly regarding how AI systems classify, recommend, or assess students, and how such classifications may reproduce social or educational inequalities.
• Exploration of AI-teacher collaboration models, with emphasis on how AI can augment rather than undermine teacher autonomy, pedagogical creativity, and professional judgment.
• Examination of student perspectives, focusing on trust, acceptance, resistance, and emotional responses to AI-mediated learning environments, particularly among vulnerable populations.
• Development of policy and governance frameworks that support the ethical deployment of AI in education, ensuring data protection, accountability, and inclusivity, especially in low-income and rural educational settings.
Such interdisciplinary and multi-perspective research will contribute to the development of a just, human-centered, and sustainable AI-enhanced educational ecosystem. Without addressing these broader issues, technological solutions risk remaining mere cutting-edge interventions rather than genuine attempts to transform education.
Conflict of interests
The authors declare no conflict of interest.
Author Contributions
Conceptualization, L.S., A.R., formal analysis, O.K., B.B; methodology, O.M.; writing - original draft preparation, L.S.; writing - review and editing, A.R., O.K., B.B., O.M. All authors have read and agreed to the published version of the manuscript.