Automated Essay Scoring
Bridgeman et al. (2009) pdf
- Automated scoring models for evaluating English essays (e-rater)
- E-rater gave significantly higher scores for 11th-grade essays written by Hispanic and Asian-American students than for those written by White students
- E-rater gave significantly higher scores for TOEFL essays (independent task) written by speakers of Chinese and Korean
- E-rater correlated poorly with human raters and gave higher scores for GRE essays (both issue and argument prompts) written by Chinese speakers
- The e-rater system performed comparably accurately for male and female students across their 11th-grade essays, TOEFL, and GRE writing (a sketch of this kind of machine vs. human subgroup comparison follows below)
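The gaps reported in these studies come from comparing machine scores with human scores within each demographic group, usually summarized as a mean difference (often scaled by a standard deviation). The following is a minimal sketch of that kind of subgroup comparison, not the studies' actual analysis; the column names (`group`, `human_score`, `erater_score`) and the toy data are hypothetical.

```python
import pandas as pd

def subgroup_score_gaps(df: pd.DataFrame) -> pd.DataFrame:
    """Mean (machine - human) score gap per subgroup, also expressed in
    units of the human-score standard deviation (a crude scale; the
    papers' exact standardization may differ)."""
    df = df.copy()
    df["gap"] = df["erater_score"] - df["human_score"]
    human_sd = df["human_score"].std(ddof=1)
    return (
        df.groupby("group")["gap"]
        .agg(mean_gap="mean", n="size")
        .assign(std_gap=lambda t: t["mean_gap"] / human_sd)
    )

# Hypothetical toy data, for illustration only
scores = pd.DataFrame({
    "group":        ["A", "A", "A", "B", "B", "B"],
    "human_score":  [4.0, 3.5, 4.0, 4.5, 4.0, 3.0],
    "erater_score": [4.5, 4.0, 4.0, 4.0, 4.0, 2.5],
})
print(subgroup_score_gaps(scores))
```

A positive `std_gap` for a group would mean the machine scored that group more generously than human raters, which is the direction of the differences summarized above.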
Bridgeman et al. (2012) pdf
- A later version of the e-rater automated scoring model for evaluating English essays
- E-rater gave significantly lower scores than human raters when assessing African-American students' written responses to the GRE issue prompt
Ramineni & Williamson (2018) pdf
- Revised automated scoring engine (e-rater) for assessing GRE essays
- E-rater gave African American test-takers significantly lower scores than human raters when assessing their written responses to argument prompts
- The shorter essays written by African American test-takers were more likely to receive lower scores, reflecting weaknesses in content and organization (see the length-correlation sketch below)
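One rough way to examine the length-related finding noted above is the within-group correlation between response length and machine score. A minimal sketch, assuming the same hypothetical column naming as above plus a `word_count` column:

```python
import pandas as pd

def length_score_correlation(df: pd.DataFrame) -> pd.Series:
    """Pearson correlation between word count and machine score,
    computed separately within each subgroup."""
    return (
        df.groupby("group")
        .apply(lambda g: g["word_count"].corr(g["erater_score"]))
        .rename("length_score_r")
    )

# Hypothetical toy data, for illustration only
essays = pd.DataFrame({
    "group":        ["A", "A", "A", "B", "B", "B"],
    "word_count":   [220, 310, 280, 150, 400, 330],
    "erater_score": [3.5, 4.5, 4.0, 2.5, 4.5, 4.0],
})
print(length_score_correlation(essays))
```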
Wang et al. (2018) pdf
- Automated scoring model for evaluating English spoken responses (SpeechRater)
- SpeechRater gave significantly lower scores than human raters for responses by German speakers
- SpeechRater scored in favor of the Chinese speaker group, with machine scores higher on average than human (H1) rater scores