Abstract
This study examines how vocabulary is assessed in oral proficiency examinations. Vocabulary is increasingly used as an indicator of candidates' oral proficiency in large-scale tests, yet there is little empirical evidence so far on how raters actually assess it. In this experiment, 25 participants rated one English oral text produced by a candidate whose first language is Chinese. The raters' verbal protocols were transcribed and coded to identify what they attended to when assessing vocabulary. The candidate's use of 'advanced' words was found to have a direct impact on vocabulary scores, and both vocabulary and non-vocabulary features emerged in the raters' protocols. These findings call into question the possibility of assessing vocabulary as a discrete construct.
Original language | English
---|---
Pages (from-to) | 1-13
Number of pages | 13
Journal | System
Volume | 46
Issue number | 1
DOIs |
Publication status | Published - Oct 2014
Keywords
- Analytic rating scales
- Oral proficiency
- Rater protocols
- Vocabulary assessment