Speech Recognition for Education


Wang et al. (2018) pdf

  • Automated scoring model for evaluating English spoken responses
  • SpeechRater gave significantly lower scores than human raters to German students
  • SpeechRater gave students from China higher scores than human raters did, with H1 rater scores higher than the mean (see the sketch below)
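
A minimal sketch (not from the paper) of the kind of group-level comparison behind these findings: mean machine-minus-human score differences per native-language (L1) group. The column names "l1", "human_score", and "machine_score" are hypothetical.

    import pandas as pd

    def score_gap_by_group(df: pd.DataFrame) -> pd.DataFrame:
        """Mean (machine - human) score difference per L1 group."""
        out = df.assign(gap=df["machine_score"] - df["human_score"])
        return out.groupby("l1")["gap"].agg(["mean", "count"]).sort_values("mean")

    # Toy illustration: a negative mean gap means the machine scored that group
    # lower than human raters did, on average.
    toy = pd.DataFrame({
        "l1": ["German", "German", "Chinese", "Chinese"],
        "human_score": [3.5, 4.0, 3.0, 3.5],
        "machine_score": [3.0, 3.4, 3.4, 3.8],
    })
    print(score_gap_by_group(toy))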


  Loukina & Buzick (2017) pdf

  • A model (SpeechRater) for automatically scoring open-ended spoken responses from speakers with documented or suspected speech impairments
  • SpeechRater was less accurate for test takers deferred for signs of a speech impairment (ρ² = .57) than for test takers given accommodations for documented disabilities (ρ² = .73); see the sketch below
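
A minimal sketch of how such a subgroup accuracy figure could be computed, assuming ρ² here denotes the squared correlation between human and machine scores within each subgroup (an assumption, not necessarily the paper's exact procedure). The data below are synthetic.

    import numpy as np

    def squared_correlation(human: np.ndarray, machine: np.ndarray) -> float:
        """Squared Pearson correlation between human and machine scores."""
        r = np.corrcoef(human, machine)[0, 1]
        return float(r ** 2)

    # Synthetic illustration: noisier machine agreement in one subgroup yields a
    # lower squared correlation, mirroring the .57 vs .73 pattern reported.
    rng = np.random.default_rng(0)
    human = rng.normal(3.0, 0.5, size=200)
    machine_documented = human + rng.normal(0, 0.2, size=200)  # closer agreement
    machine_deferred = human + rng.normal(0, 0.5, size=200)    # weaker agreement
    print(squared_correlation(human, machine_documented))
    print(squared_correlation(human, machine_deferred))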


Loukina et al. (2019) pdf

  • Models providing automated speech scores on an English language proficiency assessment
  • The L1-specific model, trained separately on responses from speakers of each native language, was the least fair, especially for Chinese, Japanese, and Korean speakers, but not for German speakers
  • All three models (Baseline, Fair feature subset, L1-specific) performed worse for Japanese speakers (see the sketch below)
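
A minimal sketch (not from the paper) of how per-group performance for several scoring models can be tabulated so comparisons like those above are visible at a glance; the model and column names are hypothetical.

    import numpy as np
    import pandas as pd

    def per_group_r2(df: pd.DataFrame, model_cols: list) -> pd.DataFrame:
        """Squared human-machine score correlation for each model, per L1 group."""
        rows = []
        for l1, grp in df.groupby("l1"):
            row = {"l1": l1}
            for col in model_cols:
                r = np.corrcoef(grp["human_score"], grp[col])[0, 1]
                row[col] = r ** 2
            rows.append(row)
        return pd.DataFrame(rows).set_index("l1")

    # Usage (hypothetical column names):
    # per_group_r2(scores_df, ["baseline", "fair_subset", "l1_specific"])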