Asian/Asian-American Learners in North America
Christie et al. (2019) pdf
- Models predicting students' high school dropout
- The decision tree models showed little difference in AUC across White, Black, Hispanic, Asian, American Indian and Alaska Native, and Native Hawaiian and Pacific Islander students (a per-group AUC sketch follows this list)
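A minimal sketch of the kind of per-group AUC check described above, using a decision tree as the classifier. The features, labels, group names, and data here are all hypothetical stand-ins, not the features or dataset used by Christie et al. (2019).

```python
# Hypothetical sketch: per-group AUC for a decision-tree dropout predictor.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
groups = rng.choice(["White", "Black", "Hispanic", "Asian", "AIAN", "NHPI"], size=n)
X = rng.normal(size=(n, 10))                                # stand-in features (GPA, attendance, ...)
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)    # stand-in dropout label

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, groups, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

# AUC per group: large gaps would mean the model ranks students
# less accurately for some groups than for others.
for g in np.unique(g_te):
    mask = g_te == g
    print(g, round(roc_auc_score(y_te[mask], scores[mask]), 3))
```

With synthetic data like this the group AUCs come out nearly identical; on real data, comparing these values is the check the study performs.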
Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring page]
- Automated scoring models for evaluating English essays (e-rater)
- E-rater gave significantly higher scores than human raters for 11th-grade essays written by Hispanic and Asian-American students, relative to essays written by White students (see the comparison sketch after this list)
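A sketch of how one might quantify such machine-versus-human score discrepancies by group: compute the machine-minus-human difference per group and test it against zero. The scores, group labels, and effect sizes below are simulated for illustration and are not the Bridgeman et al. (2009) data.

```python
# Hypothetical sketch: does an automated scorer drift from human raters
# more for some demographic groups than others?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = np.repeat(["White", "Hispanic", "Asian-American"], 200)
human = rng.normal(loc=4.0, scale=0.8, size=groups.size)          # human rater scores
shift = np.where(groups == "White", 0.0, 0.25)                    # simulated upward drift
machine = human + shift + rng.normal(scale=0.3, size=groups.size) # automated scores

# Mean machine-minus-human discrepancy per group, with a one-sample t-test
# of the discrepancy against zero within each group.
for g in ["White", "Hispanic", "Asian-American"]:
    d = machine[groups == g] - human[groups == g]
    t, p = stats.ttest_1samp(d, 0.0)
    print(f"{g}: mean diff = {d.mean():.2f}, t = {t:.2f}, p = {p:.3g}")
```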
Lee and Kizilcec (2020) pdf
- Models predicting college success (earning the median grade or above)
- Random forest models performed significantly worse for underrepresented minority (URM) students (American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than for non-URM students (White and Asian)
- Both the model's fairness (demographic parity and equality of opportunity) and its accuracy improved after adjusting the decision threshold from the default 0.5 to group-specific values (see the sketch after this list)
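A sketch of the two fairness metrics and the group-specific-threshold idea, assuming a random-forest success predictor. The data, feature columns, and the particular threshold values are hypothetical; this is not the Lee and Kizilcec (2020) implementation.

```python
# Hypothetical sketch: demographic parity and equality of opportunity for a
# random-forest "college success" predictor, under a shared 0.5 threshold
# versus group-specific thresholds.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 4000
urm = rng.choice([0, 1], size=n, p=[0.7, 0.3])                 # 1 = URM student (simulated)
X = np.column_stack([rng.normal(size=n) - 0.3 * urm,           # stand-in prior achievement
                     rng.normal(size=n)])
y = (X[:, 0] + 0.4 * rng.normal(size=n) > 0).astype(int)       # 1 = median grade or above

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, urm, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]

def report(thresholds):
    # Demographic parity compares positive-prediction rates (PPR) across groups;
    # equality of opportunity compares true-positive rates (TPR).
    for g in (0, 1):
        mask = g_te == g
        pred = probs[mask] >= thresholds[g]
        ppr = pred.mean()
        tpr = pred[y_te[mask] == 1].mean()
        acc = (pred == y_te[mask]).mean()
        name = "URM" if g else "non-URM"
        print(f"{name}: PPR={ppr:.2f} TPR={tpr:.2f} acc={acc:.2f}")

report({0: 0.5, 1: 0.5})   # single default threshold for both groups
report({0: 0.5, 1: 0.4})   # illustrative group-specific thresholds
```

Lowering the threshold for the disadvantaged group raises its positive-prediction and true-positive rates toward the other group's, which is the mechanism by which group-specific thresholds can improve demographic parity and equality of opportunity.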