Speech Recognition for Education

From Penn Center for Learning Analytics Wiki

Revision as of 13:10, 15 February 2022

Wang et al. (2018) [pdf]

  • Automated scoring model (SpeechRater) for evaluating English spoken responses
  • SpeechRater gave significantly lower scores than human raters for the German L1 group
  • SpeechRater scored in favor of the Chinese group, with scores higher than the H1 (human) rater mean
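The group-level bias reported above amounts to comparing machine and human scores for the same test takers within each L1 group. A minimal sketch of that check, using hypothetical score values (not data from the study):

```python
def mean_score_gap(machine_scores, human_scores):
    """Mean (machine - human) score difference for one L1 group.

    A negative value means the automated rater scores the group lower
    than human raters on average; a positive value means it scores
    the group higher.
    """
    diffs = [m - h for m, h in zip(machine_scores, human_scores)]
    return sum(diffs) / len(diffs)

# Hypothetical scores for illustration only (not values from Wang et al.)
german_gap = mean_score_gap([2.5, 3.0, 2.0], [3.0, 3.5, 2.5])
chinese_gap = mean_score_gap([3.5, 4.0, 3.0], [3.0, 3.5, 2.8])
```

A negative `german_gap` alongside a positive `chinese_gap` would correspond to the pattern of under- and over-scoring described in the bullets; the study would additionally test whether such gaps are statistically significant.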


  Loukina & Buzick (2017) [[https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ets2.12170 pdf]]

  • A model (SpeechRater) that automatically scores open-ended spoken responses, evaluated for speakers with documented or suspected speech impairments
  • SpeechRater was less accurate for test takers deferred for signs of speech impairment (ρ² = .57) than for test takers given accommodations for documented disabilities (ρ² = .73)
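The accuracy figures above are squared correlations (ρ²) between machine and human scores; a higher value means the automated scores track human scores more closely. A minimal sketch of how that metric is computed, assuming ρ is the Pearson correlation:

```python
from math import sqrt

def squared_correlation(machine, human):
    """Squared Pearson correlation (rho^2) between machine and human scores.

    rho^2 near 1 means automated scores closely track human scores;
    lower values (e.g. .57 vs .73) indicate weaker agreement.
    """
    n = len(machine)
    mean_m = sum(machine) / n
    mean_h = sum(human) / n
    cov = sum((m - mean_m) * (h - mean_h) for m, h in zip(machine, human))
    var_m = sum((m - mean_m) ** 2 for m in machine)
    var_h = sum((h - mean_h) ** 2 for h in human)
    return (cov / sqrt(var_m * var_h)) ** 2
```

Comparing ρ² across the deferred and accommodated groups, as the study does, is a direct way to ask whether the scoring model is equally reliable for both populations.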