Speech Recognition for Education

From Penn Center for Learning Analytics Wiki
Wang et al. (2018) [[https://www.researchgate.net/publication/336009443_Monitoring_the_performance_of_human_and_automated_scores_for_spoken_responses pdf]]
* SpeechRater: an automated scoring model for evaluating communicative competence in English spoken responses
* Performance was particularly low for native speakers of German and Telugu
* SpeechRater gave significantly lower scores than human raters for German speakers
* Scores were systematically biased upward for Chinese students and downward for German students
* SpeechRater favored the Chinese group, scoring higher than the mean of the H1 human-rater scores
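The bias pattern described above can be illustrated with a minimal sketch (this is not the paper's method, and the scores below are hypothetical): compute the mean difference between automated and human scores per native-language group, where a positive value means the machine scores higher than the human raters.

```python
# Hypothetical illustration of per-group scoring bias (machine - human).
# All data values are made up for demonstration; they are not from Wang et al.
from statistics import mean

# (group, human_score, machine_score) — illustrative values only
scores = [
    ("Chinese", 3.0, 3.4), ("Chinese", 2.8, 3.1),
    ("German",  3.5, 3.0), ("German",  3.6, 3.2),
]

def mean_bias(rows):
    """Mean (machine - human) score per group; positive = machine scores higher."""
    by_group = {}
    for group, human, machine in rows:
        by_group.setdefault(group, []).append(machine - human)
    return {g: round(mean(diffs), 2) for g, diffs in by_group.items()}

print(mean_bias(scores))
```

With these toy numbers the sketch reports an upward bias for the Chinese group and a downward bias for the German group, mirroring the direction of the findings summarized above.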
