Asian/Asian-American Learners in North America
Christie et al. (2019) pdf
- Models predicting students' high school dropout
- The decision trees showed little difference in AUC across White, Black, Hispanic, Asian, American Indian and Alaska Native, and Native Hawaiian and Pacific Islander students (a per-group AUC check in this style is sketched below)
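This kind of subgroup comparison can be reproduced in a few lines. The sketch below is a minimal illustration, not Christie et al.'s actual pipeline: the features, dropout labels, and group assignments are all synthetic placeholders, and only the technique (fit a decision tree, then compute AUC separately per group) follows the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 10))                        # placeholder features
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)  # placeholder dropout label
group = rng.choice(["A", "B", "C"], size=n)         # placeholder demographic groups

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
scores = tree.predict_proba(X_te)[:, 1]

# Fairness check: compare AUC separately for each demographic group
for g in np.unique(g_te):
    mask = g_te == g
    print(g, round(roc_auc_score(y_te[mask], scores[mask]), 3))
```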
Bridgeman et al. (2009) pdf
- Automated essay-scoring models (e-rater) for evaluating English essays
- E-rater gave significantly higher scores for 11th grade essays written by Asian American and Hispanic students, particularly Hispanic female students
- The score difference between human raters and e-rater was significantly smaller for 11th grade essays written by White and African American students
- E-rater gave slightly lower scores for GRE essays (argument and issue) written by Black test-takers, while e-rater scores were higher for Asian test-takers in the U.S. (see the score-gap sketch below)
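The core measurement here is the per-group gap between human and machine scores. The sketch below shows one plausible way to test whether that gap differs from zero within a group; the scores and group names are synthetic placeholders, and the one-sample t-test is an assumption of mine rather than Bridgeman et al.'s exact analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {"GroupA": 400, "GroupB": 400}  # hypothetical subgroup sizes

for name, size in groups.items():
    human = rng.normal(4.0, 1.0, size)            # placeholder human scores
    machine = human + rng.normal(0.1, 0.5, size)  # placeholder e-rater scores
    gap = machine - human
    t, p = stats.ttest_1samp(gap, 0.0)            # is the mean gap nonzero?
    print(f"{name}: mean gap={gap.mean():+.3f}, t={t:.2f}, p={p:.3g}")
```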
Lee and Kizilcec (2020) pdf (https://arxiv.org/pdf/2007.00088.pdf)
- Models predicting college success (defined as earning the median grade or above)
- Random forest models performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than for non-URM students (White and Asian)
- Both the model's fairness (demographic parity and equality of opportunity) and its accuracy improved after adjusting the classification threshold from a uniform 0.5 to group-specific values (see the threshold sketch below)
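The threshold correction is straightforward to illustrate. Below is a minimal sketch assuming synthetic scores and two hypothetical groups; the group-specific threshold values are invented for illustration, not taken from Lee and Kizilcec. It shows how moving one group's cutoff changes the two fairness quantities named above: the positive-prediction rate (demographic parity) and the true positive rate (equality of opportunity).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
group = rng.choice(["URM", "non-URM"], size=n)  # hypothetical groups
y = (rng.random(n) < np.where(group == "URM", 0.45, 0.55)).astype(int)
# Placeholder model scores, deliberately shifted downward for one group
score = np.clip(0.3 * y + 0.7 * rng.random(n) - 0.1 * (group == "URM"), 0, 1)

def report(thresholds):
    for g in ["URM", "non-URM"]:
        m = group == g
        pred = score[m] >= thresholds[g]
        dp = pred.mean()              # demographic parity: P(pred=1 | group)
        tpr = pred[y[m] == 1].mean()  # equality of opportunity: TPR per group
        print(f"  {g}: P(pred=1)={dp:.3f}, TPR={tpr:.3f}")

print("Uniform 0.5 threshold:")
report({"URM": 0.5, "non-URM": 0.5})
print("Group-specific thresholds (hypothetical values):")
report({"URM": 0.42, "non-URM": 0.5})
```

Lowering the cutoff for the disadvantaged group pushes its positive-prediction rate and TPR toward the other group's, which is the kind of parity improvement the paper reports.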