Evaluating LLM-Assisted CLO–PLO Alignment Decisions for AUN-QA–Aligned Curriculum Mapping
Authors: Suwut Tumthong, Nichanun Samakthai, Pinyaphat Tasatanattakool
Journal: International Journal of Modern Education and Computer Science (IJMECS)
In issue: vol. 18, no. 3, 2026
Free access
Curriculum mapping for AUN-QA is often time-consuming and prone to inconsistency because learning-outcome evidence is scattered across multiple courses and program documents. This study examines the reliability of large language models in supporting course learning outcome–program learning outcome (CLO–PLO) alignment decisions for AUN-QA–aligned curriculum mapping. Using Mechatronics Engineering curriculum materials (2022–2024), AUN-QA v4.0 indicators were operationalized into a structured evidence schema to guide prompting and interpretation. GPT-4 produced 120 CLO–PLO alignment decisions, which were independently annotated by two domain experts and adjudicated by a third expert to establish reference labels. Model–reference agreement was evaluated using precision, recall, F1-score, and Cohen’s kappa (κ), yielding 0.89, 0.85, 0.87, and 0.81, respectively. We also developed a dashboard that summarizes alignment coverage and supports PDCA-based curriculum improvement by flagging potential gaps and redundancies. The findings suggest that LLM-assisted alignment can reduce mapping workload and improve auditability while remaining consistent with expert judgment, enabling more scalable evidence-based AUN-QA evaluation.
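The model–reference agreement metrics named in the abstract (precision, recall, F1-score, and Cohen's kappa) can be computed directly from binary alignment decisions. The sketch below is illustrative only: the label vectors are toy placeholders, not the study's 120 expert-adjudicated decisions.

```python
def agreement_metrics(reference, predicted):
    """Return (precision, recall, f1, kappa) for binary labels (1 = aligned, 0 = not)."""
    assert len(reference) == len(predicted) and reference
    n = len(reference)
    # Tally the confusion-matrix cells against the reference labels.
    tp = sum(1 for r, p in zip(reference, predicted) if r == 1 and p == 1)
    fp = sum(1 for r, p in zip(reference, predicted) if r == 0 and p == 1)
    fn = sum(1 for r, p in zip(reference, predicted) if r == 1 and p == 0)
    tn = n - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1 - pe) if pe != 1 else 1.0
    return precision, recall, f1, kappa

# Toy CLO–PLO alignment decisions (hypothetical, not the paper's data).
reference = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]   # adjudicated expert labels
predicted = [1, 1, 0, 0, 0, 1, 1, 1, 0, 0]   # model decisions
p, r, f1, k = agreement_metrics(reference, predicted)
```

Note that the reported scores are internally consistent: with precision 0.89 and recall 0.85, the harmonic mean 2(0.89)(0.85)/(0.89 + 0.85) ≈ 0.87 matches the stated F1.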
Keywords: AUN-QA Framework, Curriculum Management, Large Language Models, Learning Outcome Alignment, Quality Assurance in Higher Education, Educational Data Analytics
Short URL: https://sciup.org/15020360
IDR: 15020360 | DOI: 10.5815/ijmecs.2026.03.08