Evaluating an AI speaking assessment tool: Score accuracy, perceived validity, and oral peer feedback as feedback enhancement

Xu Liu*, Jingwen Wang*, Bin Zou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Artificial Intelligence (AI) has significantly transformed language learning approaches and outcomes. However, research on AI-assisted English for Academic Purposes (EAP) speaking classrooms remains sparse. This study evaluates "EAP Talk", an AI-assisted speaking assessment tool, examining its effectiveness in two contexts: controlled tasks (Reading Aloud) that elicit non-spontaneous speech, and uncontrolled tasks (Presentation) that generate spontaneous speech. The research assessed the accuracy and validity of EAP Talk scores by analysing 20 Reading Aloud and 20 Presentation recordings randomly selected from a pool of 64 undergraduate students. These recordings were graded by five experienced EAP teachers using both Adaptive Comparative Judgment (ACJ), a comparative scoring method, and the traditional rubric rating approach. Acknowledging that EAP Talk provides scores without detailed feedback, the study further investigated its perceived validity and examined oral peer feedback as a complementary enhancement strategy. Semi-structured interviews with four students explored their perceptions of the AI-assisted assessment process, focusing on the benefits of EAP Talk for learning, its limitations, and the effectiveness of oral peer feedback. Scoring concordance analysis shows that EAP Talk performs well in the controlled task but less so in the uncontrolled one. Content analysis of the interview data reveals that EAP Talk builds student confidence and positively shapes learning styles, while oral peer feedback markedly improves speaking skills through effective human-computer collaboration. The study calls for more precise AI assessment in uncontrolled tasks and proposes pedagogical strategies to better integrate AI into EAP speaking contexts.
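The abstract names two analytic steps: ACJ grading by the teacher judges and a scoring concordance analysis against EAP Talk's scores. As a rough illustration of how such a pipeline is often implemented (the paper's own procedure may differ), the Python sketch below fits a Bradley-Terry model to pairwise ACJ-style judgments and then correlates the resulting scores with a set of AI scores via Spearman rank correlation. The toy judgments, item names, and AI score values are invented for illustration and are not taken from the study.

```python
# Illustrative sketch only (not the authors' code). ACJ judgments are
# commonly modelled with the Bradley-Terry model; the resulting scores can
# then be correlated with the AI tool's scores as a concordance check.
import math
from collections import defaultdict


def bradley_terry(judgments, n_iters=100):
    """Turn pairwise (winner, loser) judgments into log-ability scores.

    Uses the standard MM iteration for the Bradley-Terry model
    (Hunter, 2004). Higher scores mean the item was judged better.
    """
    items = {x for pair in judgments for x in pair}
    raw_wins = defaultdict(int)   # comparisons won per item
    n = defaultdict(int)          # comparisons per unordered pair
    for winner, loser in judgments:
        raw_wins[winner] += 1
        n[frozenset((winner, loser))] += 1
    # A small pseudo-count keeps never-winning items off exactly zero.
    wins = {i: raw_wins[i] + 0.5 for i in items}

    p = {i: 1.0 for i in items}   # ability parameters, initialised flat
    for _ in range(n_iters):
        p_new = {}
        for i in items:
            denom = sum(n[frozenset((i, j))] / (p[i] + p[j])
                        for j in items if j != i)
            p_new[i] = wins[i] / denom
        # Normalise so the geometric mean is 1 (fixes the arbitrary scale).
        g = math.exp(sum(math.log(v) for v in p_new.values()) / len(p_new))
        p = {i: v / g for i, v in p_new.items()}
    return {i: math.log(v) for i, v in p.items()}


def spearman(xs, ys):
    """Spearman rank correlation (no tie handling; adequate for a sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for pos, idx in enumerate(order):
            r[idx] = float(pos)
        return r
    rx, ry = ranks(xs), ranks(ys)
    m = (len(xs) - 1) / 2.0                      # mean of ranks 0..n-1
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)          # identical for rx and ry
    return cov / var


if __name__ == "__main__":
    # Hypothetical judge decisions over three recordings A, B, C.
    toy_judgments = [("A", "B"), ("A", "C"), ("B", "C"),
                     ("A", "B"), ("C", "B"), ("A", "C")]
    acj_scores = bradley_terry(toy_judgments)
    ai_scores = {"A": 85, "B": 70, "C": 72}      # hypothetical AI scores
    order = sorted(acj_scores)
    rho = spearman([acj_scores[i] for i in order],
                   [ai_scores[i] for i in order])
    print(acj_scores)
    print(f"rank concordance with AI scores: {rho:.2f}")
```

In practice an ACJ study would collect many more comparisons per item and use a tie-aware correlation; the sketch only shows the shape of the analysis, not the study's actual parameters.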

Original language: English
Article number: 101505
Journal: Journal of English for Academic Purposes
Volume: 75
DOIs
Publication status: Published - May 2025

Keywords

  • AI-assisted speaking assessment
  • Adaptive Comparative Judgment (ACJ)
  • English for Academic Purposes (EAP)
  • Human-computer collaboration
  • Oral peer feedback
