Speech Recognition for Education
Wang et al. (2018) [pdf]
- Automated scoring model (SpeechRater) for evaluating English spoken responses
- SpeechRater gave significantly lower scores than human raters for German speakers
- SpeechRater scored in favor of the Chinese group, with machine scores running above the mean human (H1) rater scores (a group-level comparison is sketched below)
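
A minimal sketch of the group-level comparison described above: for each L1 group, compare automated (SpeechRater) scores against human (H1) rater scores via a paired t-test and a standardized mean difference. The score arrays, group labels, and choice of test statistic are hypothetical assumptions for illustration, not details from Wang et al. (2018).

```python
import numpy as np
from scipy import stats

def group_score_gap(machine, human):
    """Standardized mean difference (machine - human) and a paired t-test."""
    machine = np.asarray(machine, dtype=float)
    human = np.asarray(human, dtype=float)
    diff = machine - human
    smd = diff.mean() / diff.std(ddof=1)    # Cohen's d for paired scores
    t, p = stats.ttest_rel(machine, human)  # significance of the machine-human gap
    return smd, t, p

# Hypothetical per-L1 score pairs: (SpeechRater scores, human H1 scores).
scores = {
    "German":  ([3.1, 2.8, 3.5, 3.0], [3.6, 3.2, 3.9, 3.4]),
    "Chinese": ([3.4, 3.7, 3.2, 3.6], [3.1, 3.3, 3.0, 3.2]),
}
for l1, (machine, human) in scores.items():
    smd, t, p = group_score_gap(machine, human)
    print(f"{l1}: SMD={smd:+.2f}, t={t:.2f}, p={p:.3f}")
```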
Loukina & Buzick (2017) [pdf]
- Evaluates a model (SpeechRater) that automatically scores open-ended spoken responses from speakers with documented or suspected speech impairments
- SpeechRater was less accurate for test takers deferred for signs of speech impairment (ρ² = .57) than for test takers given accommodations for documented disabilities (ρ² = .73); a subgroup ρ² sketch follows below
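
A minimal sketch of that subgroup-accuracy comparison, reading ρ² as the squared correlation between machine and human scores computed separately per subgroup (the paper's exact definition of ρ² may differ). All score arrays are hypothetical placeholders, not data from Loukina & Buzick (2017).

```python
import numpy as np

def rho_squared(machine, human):
    """Squared Pearson correlation between machine and human scores."""
    r = np.corrcoef(machine, human)[0, 1]
    return r ** 2

# Hypothetical (machine, human) score pairs per subgroup.
subgroups = {
    "deferred":     ([2.5, 3.0, 2.2, 3.4, 2.8], [2.9, 3.1, 2.0, 3.8, 2.4]),
    "accommodated": ([3.0, 2.6, 3.5, 2.9, 3.3], [3.1, 2.5, 3.6, 3.0, 3.2]),
}
for name, (machine, human) in subgroups.items():
    print(f"{name}: rho^2 = {rho_squared(machine, human):.2f}")
```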
Loukina et al. (2019) [pdf]
- Models providing automated speech scores on an English language proficiency assessment
- The L1-specific model, trained on the speaker's native language, was the least fair, especially for Chinese, Japanese, and Korean speakers, though not for German speakers
- All models (Baseline, Fair feature subset, L1-specific) disadvantaged Japanese speakers (a per-group comparison across models is sketched below)
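
A rough sketch of how such a cross-model fairness comparison could be operationalized: compute the mean machine-minus-human score gap per L1 group for each model variant, where a fair model keeps every group's gap near zero. The metric choice and all scores below are assumptions for illustration, not the evaluation protocol of Loukina et al. (2019).

```python
import numpy as np

def per_group_gaps(machine_by_group, human_by_group):
    """Mean (machine - human) score gap for each L1 group."""
    return {
        l1: float(np.mean(np.asarray(machine_by_group[l1], dtype=float)
                          - np.asarray(human_by_group[l1], dtype=float)))
        for l1 in machine_by_group
    }

# Hypothetical human rater scores and machine scores per model variant.
human = {"Japanese": [3.0, 3.4, 2.8], "German": [3.2, 3.6, 3.1]}
models = {
    "Baseline":    {"Japanese": [2.7, 3.0, 2.5], "German": [3.2, 3.5, 3.1]},
    "Fair subset": {"Japanese": [2.8, 3.2, 2.6], "German": [3.1, 3.6, 3.0]},
    "L1-specific": {"Japanese": [2.5, 2.9, 2.4], "German": [3.2, 3.6, 3.2]},
}
for model, machine in models.items():
    gaps = per_group_gaps(machine, human)
    print(model, {l1: round(g, 2) for l1, g in gaps.items()})
```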