Automated Scoring Of Spontaneous Speech From Young Learners Of English Using Transformers
Xinhao Wang, Keelan Evanini, Yao Qian, Matthew Mulholland
SPS
This study explores the use of Transformer-based models for the automated assessment of children's non-native spontaneous speech. Traditional approaches to this task have relied heavily on delivery features (e.g., fluency); the goal of the current study is instead to build automated scoring models based solely on transcriptions, in order to see how well they capture additional aspects of speaking proficiency (e.g., content appropriateness, vocabulary, and grammar) despite the high word error rate (WER) of automatic speech recognition (ASR) on children's non-native spontaneous speech. Transformer-based models were built using both manual transcriptions and ASR hypotheses, and versions of the models that incorporate the prompt text were investigated in order to measure content appropriateness more directly. Two baseline systems were used for comparison: an attention-based Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) and a Support Vector Regressor (SVR) with manually engineered content-related features. Experimental results demonstrate the effectiveness of the Transformer-based models: the prompt-aware model using ASR hypotheses achieves a Pearson correlation coefficient (r) of 0.835 with holistic proficiency scores assigned by human experts, outperforming both the attention-based LSTM-RNN baseline (r = 0.791) and the SVR baseline (r = 0.767).