Online Framework of Examination for Evaluating Learner’s Knowledge

Authors: Bidyut Das, Rupa Debnath, Debnarayan Khatua

Journal: International Journal of Education and Management Engineering (IJEME)

Issue: Vol. 14, No. 6, 2024.

Free access

The COVID-19 pandemic has necessitated a shift to online assessments, posing significant challenges for teachers in fairly evaluating student performance. The absence of invigilation has led to widespread cheating, with students copying answers from the Internet or top-ranked peers. This paper addresses these issues by proposing guidelines and techniques for fair student assessment without invigilation. The research begins with an analysis of traditional assessment methods and their limitations in the context of unmonitored online exams. It then explores various online examination frameworks, including multiple-choice questions, short-answer questions, and interactive simulations. The study identifies key weaknesses in current online assessment practices and highlights the potential of advanced online examination frameworks. By implementing the suggested techniques, educators can improve the reliability and fairness of online assessments, ensuring a more accurate evaluation of students' knowledge. This article serves as a valuable resource for educators, instructional designers, and e-learning professionals seeking to enhance the efficacy of online assessments.


Keywords: Traditional examination, Online examination, Question generation, Automatic assessment, Educational learning

Short address: https://sciup.org/15019554

IDR: 15019554   |   DOI: 10.5815/ijeme.2024.06.05
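To make the kind of technique the abstract points to more concrete (cloze-style question generation and per-student randomization of question papers to discourage answer copying), here is a minimal Python sketch. It is not the authors' framework: the helpers make_cloze and build_student_paper, the seeding scheme, and the sample sentences are illustrative assumptions only.

    import random
    import re

    def make_cloze(sentence, keyword):
        # Blank out one occurrence of the keyword to form a fill-in-the-blank item.
        pattern = re.compile(re.escape(keyword), flags=re.IGNORECASE)
        stem = pattern.sub("_____", sentence, count=1)
        return {"stem": stem, "answer": keyword}

    def build_student_paper(question_bank, n_items, seed):
        # A per-student seed (e.g. a roll number) gives a reproducible but
        # individualised selection, so no two papers are identical.
        rng = random.Random(seed)
        return rng.sample(question_bank, n_items)

    if __name__ == "__main__":
        bank = [
            make_cloze("Formative assessment monitors learning during instruction.", "Formative"),
            make_cloze("A distractor is an incorrect option in a multiple-choice question.", "distractor"),
            make_cloze("Summative assessment evaluates learning at the end of a course.", "Summative"),
        ]
        for item in build_student_paper(bank, n_items=2, seed="roll-42"):
            print(item["stem"], "->", item["answer"])

In practice the question bank would come from an automatic generator of the sort surveyed in the references below, and the per-student seed would let the same bank produce distinct papers that can still be graded automatically.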

References: Online Framework of Examination for Evaluating Learner’s Knowledge

  • Barak Rosenshine, Hildegard Froehlich, and Inshad Fakhouri. Systematic instruction. The new handbook of research on music teaching and learning, pages 299–314, 2002.
  • Seymour Bernard Sarason. And what do you mean by learning?, 2004.
  • Soujanya Poria, Erik Cambria, Newton Howard, Guang-Bin Huang, and Amir Hussain. Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing, 174:50–59, 2016.
  • Seyed M Ismail, DR Rahul, Indrajit Patra, and Ehsan Rezvani. Formative vs. summative assessment: impacts on academic motivation, attitude toward learning, test anxiety, and self-regulation skill. Language Testing in Asia, 12(1):40, 2022.
  • Vandana Singh and Alexander Thurman. How many ways can we define online learning? a systematic literature review of definitions of online learning (1988-2018). American Journal of Distance Education, 33(4):289–306, 2019.
  • Sarah Ali Abdullah, Tariq Adnan Fadil, and Noor Ahmed. Online examination system (electronic learning). In Research in Intelligent and Computing in Engineering: Select Proceedings of RICE 2020, pages 309–323. Springer, 2021.
  • Doan Thi Hue Dung. The advantages and disadvantages of virtual learning. IOSR Journal of Research & Method in Education, 10(3):45–48, 2020.
  • Leisi Pei and Hongbin Wu. Does online learning work better than offline learning in undergraduate medical education? a systematic review and meta-analysis. Medical education online, 24(1):1666538, 2019.
  • Mimi M Recker, James Dorward, and Laurie Miller Nelson. Discovery and use of online learning resources: Case study findings. Journal of Educational Technology & Society, 7(2):93–104, 2004.
  • Dhawaleswar Rao Ch and Sujan Kumar Saha. Automatic multiple choice question generation from text: A survey. IEEE Transactions on Learning Technologies, 13(1):14–25, 2018.
  • Bidyut Das, Mukta Majumder, Arif Ahmed Sekh, and Santanu Phadikar. Automatic question generation and answer assessment for subjective examination. Cognitive Systems Research, 72:14–22, 2022.
  • Ghader Kurdi, Jared Leo, Bijan Parsia, Uli Sattler, and Salam Al-Emari. A systematic review of automatic question generation for educational purposes. International Journal of Artificial Intelligence in Education, 30(1):121–204, 2020.
  • Bidyut Das, Mukta Majumder, Santanu Phadikar, and Arif Ahmed Sekh. Automatic question generation and answer assessment: a survey. Research and Practice in Technology Enhanced Learning, 16(1):1–15, 2021.
  • Frank Quansah. Traditional or performance assessment: What is the right way to assessing learners. Research on Humanities and Social Sciences, 8(1):21–24, 2018.
  • Deepankar Vishwas Kotwal, Shubham Rajendra Bhadke, Aishwarya Sanjay Gunjal, and Puspendu Biswas. Online examination system. International Research Journal of Engineering and Technology (IRJET), 3(1):115–117, 2016.
  • Bidyut Das and Mukta Majumder. Factual open cloze question generation for assessment of learner’s knowledge. International Journal of Educational Technology in Higher Education, 14(1):1–12, 2017.
  • Annamaneni Narendra, Manish Agarwal, and Rakshit Shah. Automatic cloze-questions generation. In Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013, pages 511–515, 2013.
  • Bidyut Das, Mukta Majumder, Santanu Phadikar, and Arif Ahmed Sekh. Automatic generation of fill-in-the-blank question with corpus-based distractors for e-assessment to enhance learning. Computer Applications in Engineering Education, 27(6):1485–1495, 2019.
  • Rui Pedro dos Santos Correia, Jorge Baptista, Nuno Mamede, Isabel Trancoso, and Maxine Eskenazi. Automatic generation of cloze question distractors. In Second language studies: acquisition, learning, education and technology, 2010.
  • Takuya Goto, Tomoko Kojiri, Toyohide Watanabe, Tomoharu Iwata, and Takeshi Yamada. Automatic generation system of multiple-choice cloze questions and its evaluation. Knowledge Management & E-Learning: An International Journal, 2(3):210–224, 2010.
  • Bidyut Das, Mukta Majumder, Santanu Phadikar, and Arif Ahmed Sekh. Multiple-choice question generation with auto-generated distractors for computer-assisted educational assessment. Multimedia Tools and Applications, 80(21):31907–31925, 2021.
  • Rakesh Patra and Sujan Kumar Saha. A hybrid approach for automatic generation of named entity distractors for multiple choice questions. Education and Information Technologies, 24(2):973–993, 2019.
  • Tomáš Effenberger. Automatic question generation and adaptive practice. PhD thesis, Masarykova univerzita, Fakulta informatiky, 2015.
  • Simon Smith, PVS Avinesh, and Adam Kilgarriff. Gap-fill tests for language learners: Corpus-driven item generation. In Proceedings of ICON-2010: 8th International Conference on Natural Language Processing, pages 1–6. Macmillan Publishers India, 2010.
  • Ruslan Mitkov, Ha Le An, and Nikiforos Karamanis. A computer-aided environment for generating multiple-choice test items. Natural Language Engineering, 12(2):177–194, 2006.
  • Lee Becker, Sumit Basu, and Lucy Vanderwende. Mind the gap: learning to choose gaps for question generation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 742–751, 2012.
  • Andreas Papasalouros, Konstantinos Kanaris, and Konstantinos Kotis. Automatic generation of multiple choice questions from domain ontologies. e-Learning, 1:427–434, 2008.
  • PAA Dumal, WKD Shanika, SAD Pathinayake, and Thanuja Chandani Sandanayake. Adaptive and automated online assessment evaluation system. In 2017 11th International Conference on Software, Knowledge, Information Management and Applications (SKIMA), pages 1–8. IEEE, 2017.
  • Itziar Aldabe, Maddalen Lopez De Lacalle, Montse Maritxalar, Edurne Martinez, and Larraitz Uria. Arikiturri: an automatic question generator based on corpora and NLP techniques. In Intelligent Tutoring Systems: 8th International Conference, ITS 2006, Jhongli, Taiwan, June 26-30, 2006. Proceedings 8, pages 584–594. Springer, 2006.
  • Sujan Kumar Saha. Towards development of a system for automatic assessment of the quality of a question paper. Smart Learning Environments, 8(1):1–14, 2021.
  • Md Motiur Rahman and Fazlul Hasan Siddiqui. NLP-based automatic answer script evaluation. PhD thesis, DUET Journal, 2020.
  • Mingyu Feng, Neil Heffernan, and Kenneth Koedinger. Addressing the assessment challenge with an online system that tutors as it assesses. User modeling and user-adapted interaction, 19:243–266, 2009.
  • Steven Burrows, Iryna Gurevych, and Benno Stein. The eras and trends of automatic short answer grading. International Journal of Artificial Intelligence in Education, 25(1):60–117, 2015.
  • Mariano Felice, Shiva Taslimipoor, Øistein E Andersen, and Paula Buttery. CEPOC: The Cambridge Exams Publishing Open Cloze dataset. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4285–4290, 2022.
  • Sima Caspari-Sadeghi, Brigitte Forster-Heinlein, Jutta Maegdefrau, and Lena Bachl. Student-generated questions: Developing mathematical competence through online assessment. International Journal for the Scholarship of Teaching and Learning, 15(1):8, 2021.
  • Steven Stack. The impact of exam environments on student test scores in online courses. Journal of Criminal Justice Education, 26(3):273–282, 2015.
  • GR Cluskey Jr, Craig R Ehlen, and Mitchell H Raiborn. Thwarting online exam cheating without proctor supervision. Journal of Academic and Business Ethics, 4(1):1–7, 2011.
  • Kenrie Hylton, Yair Levy, and Laurie P Dringus. Utilizing webcam-based proctoring to deter misconduct in online exams. Computers & Education, 92:53–63, 2016.
  • Brendan Juba and Hai S Le. Precision-recall versus accuracy and the role of large data sets. In Proceedings of the AAAI conference on artificial intelligence, volume 33(01), pages 4039–4048, 2019.
Research article