Automated Essay Scoring

Bridgeman, Trapani, and Attali (2009) [pdf]

  • E-rater gave significantly higher scores than human raters for 11th grade essays written by Asian American and Hispanic students, particularly Hispanic female students
  • The score difference between human raters and e-rater was significantly smaller for 11th grade essays written by White and African American students (a computation sketch follows this list)
  • E-rater gave slightly lower scores for GRE essays (argument and issue) written by Black test-takers, while e-rater scores were higher for Asian test-takers in the U.S.
  • E-rater gave significantly higher scores for students from China and South Korea than for students from 14 other countries when assessing the independent writing task in the Test of English as a Foreign Language (TOEFL)
  • E-rater gave slightly higher scores for GRE analytical writing, on both argument and issue prompts, written by students from China, whose responses tended to be the longest but below average on grammar, usage, and mechanics
  • E-rater performed accurately for male and female students when assessing 11th grade English essays and the TOEFL independent writing task
  • While feature-level score differences were identified across gender and ethnic groups (e.g., e-rater gave higher scores on word length and vocabulary level but lower scores on grammar and mechanics when grading 11th grade essays written by Asian American female students), the authors called for larger samples to confirm these findings
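The "score difference between human raters and e-rater" referenced in this list is usually summarized per demographic group, often as a standardized mean difference between the two sets of scores. The sketch below is a minimal illustration of such a per-group comparison, not the authors' code; the column names group, human_score, and erater_score are hypothetical.

import pandas as pd

def score_gap_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Mean human score, mean e-rater score, and the standardized mean
    difference (human minus e-rater) for each demographic group."""
    # Pool both score columns so every group is standardized on the same scale
    pooled_sd = pd.concat([df["human_score"], df["erater_score"]]).std()
    rows = []
    for group, sub in df.groupby("group"):
        diff = sub["human_score"].mean() - sub["erater_score"].mean()
        rows.append({
            "group": group,
            "n": len(sub),
            "mean_human": sub["human_score"].mean(),
            "mean_erater": sub["erater_score"].mean(),
            "std_mean_diff": diff / pooled_sd,  # positive = humans scored the group higher
        })
    return pd.DataFrame(rows)

# Made-up example data, for illustration only
essays = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "human_score": [4.0, 3.5, 4.5, 4.5, 5.0, 4.0],
    "erater_score": [3.5, 3.0, 4.0, 4.5, 5.5, 4.5],
})
print(score_gap_by_group(essays))

A positive std_mean_diff for a group would indicate that human raters scored that group higher than e-rater did; values near zero indicate close human-machine agreement for that group.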


Bridgeman, Trapani, and Attali (2012) [pdf]

  • A later version of the e-rater automated scoring model for evaluating English essays
  • E-rater gave notably lower scores for African American and American Indian males when assessing written responses to the GRE issue prompt
  • E-rater scores were significantly lower for written responses to the GRE argument prompt by African American test-takers, both male and female
  • E-rater gave slightly higher scores for Chinese speakers (Mainland China, Taiwan, Hong Kong) and Korean speakers when assessing written responses to the independent prompt in the Test of English as a Foreign Language (TOEFL)
  • E-rater gave slightly lower scores for Arabic, Hindi, and Spanish speakers when assessing their written responses to the TOEFL independent prompt
  • E-rater gave significantly higher scores for test-takers from Mainland China than for those from Taiwan, Korea, and Japan when assessing their GRE writing, which tended to be the longest but below average on grammar, usage, and mechanics
  • The score difference between human raters and e-rater was marginal when written responses to the GRE issue prompt by male and female test-takers were compared
  • The score difference was significantly greater for written responses to the GRE argument prompt, where e-rater gave lower scores for male test-takers, particularly African American, American Indian, and Hispanic males


Ramineni and Williamson (2018) [pdf]

  • Revised automated scoring engine (e-rater) for assessing GRE essays
  • E-rater gave African American test-takers significantly lower scores than human raters when assessing their written responses to argument prompts
  • The shorter essays written by African American test-takers were more likely to receive lower scores, reflecting weaknesses in content and organization (see the sketch after this list)
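Several findings in this entry tie shorter responses to larger human/e-rater gaps. The sketch below is a hedged illustration of how that association could be probed with a simple correlation; it is not the authors' analysis, and the columns word_count, human_score, and erater_score are hypothetical.

import pandas as pd

def length_gap_correlation(df: pd.DataFrame) -> float:
    """Pearson correlation between essay length and the human-minus-e-rater score gap."""
    gap = df["human_score"] - df["erater_score"]
    return df["word_count"].corr(gap)

# Made-up example data; a negative correlation would suggest that shorter
# essays tend to be scored lower by e-rater relative to human raters
essays = pd.DataFrame({
    "word_count": [180, 250, 320, 410, 500],
    "human_score": [3.0, 3.5, 4.0, 4.5, 5.0],
    "erater_score": [2.5, 3.5, 4.0, 5.0, 5.5],
})
print(length_gap_correlation(essays))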



Wang et al. (2018) [pdf]

  • An automated scoring model (SpeechRater) for evaluating spoken English responses
  • SpeechRater gave significantly lower scores than human raters to the German-speaking group
  • SpeechRater scored in favor of the Chinese group, whose machine scores were higher than the H1 (first human rater) mean