White Learners in North America
Revision as of 08:02, 21 May 2022
Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring pdf]
- Studied e-rater, an automated scoring model for evaluating English essays
- The gap between human-rater and e-rater scores was significantly smaller for 11th-grade essays written by White and African American students than for other groups
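The per-group comparison described above can be sketched as follows. This is a minimal, illustrative computation (not the paper's actual method or data): it takes hypothetical (group, human score, machine score) records and reports the mean human-minus-machine score gap per group, which is the kind of quantity that would reveal a smaller gap for some groups than others.

```python
# Illustrative sketch: mean (human - automated) score gap per group.
# Group labels and scores below are toy values, not data from the paper.
from statistics import mean

def mean_score_gap(records):
    """records: iterable of (group, human_score, machine_score) tuples.
    Returns {group: mean(human - machine)}."""
    gaps = {}
    for group, human, machine in records:
        gaps.setdefault(group, []).append(human - machine)
    return {g: mean(v) for g, v in gaps.items()}

# Toy, made-up essay scores.
essays = [
    ("A", 4.0, 3.9), ("A", 3.5, 3.6),
    ("B", 4.0, 3.4), ("B", 3.0, 2.5),
]
print(mean_score_gap(essays))  # group B shows a larger human/machine gap
```

A near-zero gap for a group means the automated scorer tracks the human raters closely for that group; a larger gap signals the kind of disparity the paper examines.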
Jiang & Pardos (2021) [https://dl.acm.org/doi/pdf/10.1145/3461702.3462623 pdf]
- Predicted university course grades using an LSTM model
- Prediction accuracy was roughly equal across racial groups
- Including race as a model feature slightly improved accuracy (~1%) across racial groups
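The equal-accuracy claim above rests on computing prediction accuracy separately per racial group. A minimal sketch of that evaluation step (the LSTM itself is omitted; the records and labels here are hypothetical, not the paper's data):

```python
# Illustrative sketch: per-group prediction accuracy, as one might
# compute to compare an LSTM grade predictor across groups.
# Records are toy (group, true_grade, predicted_grade) tuples.
def accuracy_by_group(records):
    correct, total = {}, {}
    for group, truth, pred in records:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / total[g] for g in total}

preds = [
    ("A", "B+", "B+"), ("A", "A-", "B+"),
    ("B", "C", "C"), ("B", "B", "B"),
]
print(accuracy_by_group(preds))  # {'A': 0.5, 'B': 1.0}
```

Running the same per-group evaluation twice, once for a model trained with race as a feature and once without, gives the kind of comparison behind the ~1% finding.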