A comparison of computer assisted and face-to-face speaking assessment: performance, perceptions, anxiety, and computer attitudes
Author: Alikulova S.A.
Journal: Экономика и социум
Section: Main section
Article in issue: 11 (78), 2020.
This paper reports on the ongoing evaluation of a computer-assisted system (CEOTS) for assessing the spoken English skills of Chinese university students. The system is being developed to counter the negative backwash effects of the present system of assessing speaking skills, which is available only to a tiny minority. We present data from a survey of students at the developing institution (USTC), with follow-up interviews and further interviews with English language teachers, to gauge reactions to the test and its impact on language learning. We identify the key issue as one of validity, with a tension between the construct and consequential validities of the existing system and of CEOTS. We argue that a computer-based system seems to offer the only solution to the negative backwash problem, but the development of the technology required to meet current construct validity demands makes this a very long-term prospect. We suggest that a compromise between the competing forms of validity must therefore be accepted, probably well before a computer-based system can deliver the level of interaction with examinees that would emulate the present face-to-face mode.
Keywords: computer attitudes, development, speaking, face-to-face, technology
Short address: https://sciup.org/140251715
IDR: 140251715
Text of the scientific article
Reflecting the Chinese government's determination to promote the teaching and learning of English among its citizens, all Chinese university students must now pass the College English Test (CET) at Band 4 level (or an equivalent test) as part of their degree programme. With over ten million candidates annually (and rising), CET Band 4 has become the world's largest nationally administered language test [1]. The English speaking component of this test (CET-SET) is, however, available only to a tiny minority of students because of the adoption of a direct, face-to-face testing mode: over 99% of those taking the Band 4 written papers do not take a test of spoken English. The backwash implications are clear: neither students learning English nor their teachers place an emphasis on the development of spoken English proficiency. This conflicts with the College English Curriculum objective of developing students' overall ability to use English, in particular their listening and speaking skills.
This paper reports on a joint study by USTC, SFLEP and the University of Bath to carry out a wide-reaching evaluation of various aspects of this system and of the possibility that it may offer an alternative to the current CET-SET, opening up the testing of speaking competence to the majority of students. The study is ongoing, and this paper reports on the issues, aims and approaches to the evaluation, together with some initial results. We recognise that our findings may also have more generic implications for the use of computer-assisted English speaking tests, particularly with regard to the promotion of spoken English in Chinese universities. Through a comparison with face-to-face tests, our study investigates the reliability, validity, efficiency, management, social consequences and backwash on teaching and learning of the computer-assisted system.
Computer technology has long been applied to language testing as a time- and cost-efficient way to assess the proficiency of large numbers of students. A good deal of research has therefore focused on the effectiveness and efficiency of computer-assisted (semi-direct) assessment in evaluating different constructs of the language. Nonetheless, little research has compared computer-assisted and face-to-face (direct) formats to establish whether the two modes yield similar results in oral assessment and whether one has advantages over the other. Even less investigated are the possible outcomes of administering computer-assisted speaking tests on a local basis, as achievement tests. The purpose of this exploratory study is to fill this gap by examining the relationships between a number of variables: the test scores obtained in the two test modes at two proficiency levels, the students' perceptions of the test modes, and their anxiety levels with regard to speaking in a foreign language, speaking tests, and using computers. Data were collected through four computer-assisted and four face-to-face speaking assessments, a questionnaire on perceptions of Computer Assisted Speaking Assessment (CASA) and another on perceptions of Face-to-Face Speaking Assessment (FTF), a speaking test and speaking anxiety questionnaire, and a computer familiarity questionnaire. A total of 66 tertiary-level learners of English and four instructors of English participated in the study, which was conducted at University School of Foreign Languages. The quantitative and qualitative data analyses revealed that the two test modes rank the students very differently, and that the students' perceptions of the test modes, which were more positive towards the FTF mode at both proficiency levels, are not strongly related to their performance in the speaking tests. The types of anxiety mentioned above are only weakly related to the test scores, and the strength of these relationships varies with proficiency level. It is hoped that the results of this study will be of use to language assessors, instructors, institutions and researchers concerned with language assessment.
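Since the core finding is that the two modes rank the same students very differently, the natural statistic for such a comparison is a rank correlation. The following is a minimal sketch of that kind of check, using invented placeholder scores rather than the study's data; all names and values are purely illustrative.

# Minimal sketch of a rank comparison between the two test modes.
# The score lists are invented placeholders, NOT the study's data.
from scipy.stats import spearmanr

# Hypothetical scores for the same ten students in each mode.
casa_scores = [72, 65, 80, 58, 90, 74, 61, 85, 69, 77]  # computer-assisted
ftf_scores = [70, 75, 78, 66, 82, 60, 73, 88, 64, 71]   # face-to-face

rho, p_value = spearmanr(casa_scores, ftf_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A rho near zero would mirror the reported finding that the two
# modes give very different rankings to the same students.

The same call, applied to anxiety questionnaire totals against test scores within each proficiency level, would express the weak, level-dependent relationships reported above.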
Conclusion:
At the national level, therefore, action to counter the negative backwash effect of the CET on spoken English must comprise both a compulsory test of speaking skills for all students and a significant weighting for that test in determining a student's overall grade. The problem of meeting the first of these requirements is what opened this paper and what led to the development of CEOTS as a means of mass testing speaking skills. The second requirement can in practice only be met if an acceptable standard of grading reliability can be achieved. This was shown to be a problem with CEOTS as it is presently managed, largely because of the tedious nature of the process for the graders. In principle, however, this does not differ from grading essay-type examination papers, and moderation processes similar to those used in essay marking could presumably be put in place.

In the long term, the ideal would be to have the grading also carried out by computer. This is already available in some publicly available tests for a limited range of speaking skills and criteria for judging them. USTC and its partners are already experimenting with computer-based grading of the reading out loud of a set passage. It is recognised, however, that our present level of technology and software development does not allow reliable grading across the range of criteria that might be demanded for a valid test of the sort the CET is intended to be, one which can cope with the range of speaking competencies that would be expected.

The other aspect of developing adequate grading reliability is the clear specification of criteria for the grading. Criteria do exist for CET testing, at least in the form of a specification of the domains to be considered (flexibility, appropriateness, coherence, accuracy, size and range), but the assessment of these is clearly beyond the present limits of our computer technology.
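As an illustration of why read-aloud grading is the tractable first step, the sketch below scores a reading by word error rate (WER) against the set passage, assuming a speech recogniser has already produced a transcript of the student's recording. This is a hedged illustration of one possible approach, not a description of the USTC experiment itself; the passage and transcript shown are invented.

# Crude sketch of one way read-aloud grading can be automated:
# align a transcript of the student's reading against the reference
# passage and score by word error rate. Illustrative only.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: the transcript would come from a speech
# recogniser run on the student's recording.
passage = "the quick brown fox jumps over the lazy dog"
transcript = "the quick brown fox jump over lazy dog"
print(f"WER = {word_error_rate(passage, transcript):.2f}")

Even a perfect version of this measure speaks only to accuracy; flexibility, appropriateness, coherence and range, the other domains listed above, have no such mechanical proxy, which is precisely the limitation the conclusion identifies.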
"Экономика и социум" №11(78) 2020
References
- Y. Jin and H.Z. Yang, "The English proficiency of college and university students in China: as reflected in the CET", Language, Culture and Curriculum, Vol. 19, No. 1, pp. 21-36, 2006.
- L. Bachman and A. Palmer, Language Testing in Practice, Oxford: Oxford University Press, 1996.
- C. Gipps, Beyond Testing, London: Falmer Press, 1994.
- R.M. Wolf, "Validity issues in international assessments", International Journal of Educational Research, Vol. 29, No. 5, pp. 491-501, 1998.