Asian/Asian-American Learners in North America
Christie et al. (2019) pdf: https://files.eric.ed.gov/fulltext/ED599217.pdf
- Models predicting students' high school dropout
- The decision trees showed little difference in AUC among Asian, White, Black, Hispanic, American Indian and Alaska Native, and Native Hawaiian and Pacific Islander students (a per-group AUC check is sketched below).
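A minimal sketch of the kind of per-group AUC comparison described above, assuming a hypothetical students.csv with a binary dropout label, a race column, and a few illustrative features; it is not Christie et al.'s actual pipeline.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical data: features, binary dropout label, and racial group per student.
df = pd.read_csv("students.csv")
features = ["gpa", "absences", "credits_earned"]  # illustrative feature names

X_tr, X_te, y_tr, y_te, race_tr, race_te = train_test_split(
    df[features], df["dropout"], df["race"], test_size=0.3, random_state=0
)

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
scores = tree.predict_proba(X_te)[:, 1]

# Compare AUC separately for each racial group.
for group in race_te.unique():
    mask = (race_te == group).to_numpy()
    if y_te[mask].nunique() == 2:  # AUC needs both classes present in the group
        print(group, round(roc_auc_score(y_te[mask], scores[mask]), 3))
```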
Bridgeman et al. (2009) page
- Automated scoring models (e-rater) for evaluating English essays
- E-rater gave significantly higher scores than human raters for 11th-grade essays written by Hispanic students and Asian-American students (a machine-vs-human comparison is sketched below)
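E-rater itself is a proprietary ETS system, so the sketch below only shows the kind of machine-versus-human score comparison the finding implies, on a hypothetical file of paired scores with a subgroup column.

```python
import pandas as pd
from scipy import stats

# Hypothetical paired scores: one automated and one human score per essay,
# plus the writer's subgroup (e.g., Hispanic, Asian-American, ...).
df = pd.read_csv("essay_scores.csv")
df["machine_minus_human"] = df["erater_score"] - df["human_score"]

# Mean discrepancy per subgroup, with a one-sample t-test against zero;
# a positive mean means the automated scorer is more generous than humans.
for group, sub in df.groupby("subgroup"):
    t, p = stats.ttest_1samp(sub["machine_minus_human"], 0.0)
    print(group, round(sub["machine_minus_human"].mean(), 2), round(p, 4))
```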
Lee and Kizilcec (2020) pdf: https://arxiv.org/pdf/2007.00088.pdf
- Models predicting college success (earning the median grade or above)
- Random forest algorithms performed significantly better for non-URM students (Asian and White) than for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural)
- The fairness of the model, namely demographic parity and equality of opportunity, as well as its accuracy, improved after adjusting the decision threshold from 0.5 to group-specific values (see the sketch below)
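The sketch below illustrates group-specific thresholding and the two fairness metrics named above (selection rate for demographic parity, true positive rate for equality of opportunity); the group labels and cutoff values are made up for illustration and are not the paper's.

```python
import numpy as np

def group_fairness_report(y_true, y_score, groups, thresholds):
    """Selection rate (demographic parity) and TPR (equality of opportunity)
    per group, using a group-specific cutoff instead of a single 0.5 threshold."""
    report = {}
    for g in np.unique(groups):
        m = groups == g
        y_pred = (y_score[m] >= thresholds[g]).astype(int)
        report[g] = {
            "selection_rate": y_pred.mean(),
            "tpr": y_pred[y_true[m] == 1].mean(),
        }
    return report

# Illustrative usage with made-up cutoffs chosen (e.g., on a validation set)
# so that selection rates and TPRs line up across groups:
# report = group_fairness_report(y_test, rf_scores, race_test,
#                                {"URM": 0.42, "non-URM": 0.55})
```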
Jiang & Pardos (2021) pdf
- Predicting university course grades using an LSTM
- Roughly equal accuracy across racial groups
- Slightly better accuracy (~1%) across racial groups when race is included in the model (a toy model sketch follows this entry)
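A toy PyTorch sketch of the general setup (an LSTM over a student's sequence of prior grades predicting the next grade category); the layer sizes and inputs are assumptions, not Jiang & Pardos's actual architecture.

```python
import torch
import torch.nn as nn

class GradeLSTM(nn.Module):
    """Reads a student's sequence of prior course grade categories and
    predicts the next course's grade category."""
    def __init__(self, n_grade_bins=5, emb_dim=16, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_grade_bins, emb_dim)   # grade bin -> vector
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_grade_bins)        # logits over grade bins

    def forward(self, grade_seq):                 # (batch, seq_len) integer grade bins
        h, _ = self.lstm(self.embed(grade_seq))
        return self.head(h[:, -1])                # predict from the last time step

# Example forward pass on a batch of 2 students with 4 prior grades each.
model = GradeLSTM()
logits = model(torch.randint(0, 5, (2, 4)))
print(logits.shape)  # torch.Size([2, 5])
```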
Jeong et al. (2022) pdf: https://fated2022.github.io/assets/pdf/FATED-2022_paper_Jeong_Racial_Bias_ML_Algs.pdf
- Predicting 9th grade math score from academic performance, surveys, and demographic information
- Despite comparable accuracy, the model tends to overpredict Asian students' performance
- Several fairness correction methods equalize false positive and false negative rates across groups (one simple post-processing variant is sketched below).
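The sketch below shows one simple post-processing correction of the kind the summary refers to, assuming the math score has been binarized (e.g., above/below a proficiency cut): choose each group's cutoff against a common target false positive rate, then check that FPR and FNR line up across groups. It is only one possible method, not the specific corrections the paper evaluates.

```python
import numpy as np

def error_rates(y_true, y_pred):
    """False positive and false negative rates for binary labels/predictions."""
    fpr = np.mean(y_pred[y_true == 0] == 1)
    fnr = np.mean(y_pred[y_true == 1] == 0)
    return fpr, fnr

def threshold_for_target_fpr(y_true, y_score, target_fpr):
    """Cutoff whose false positive rate on this data is roughly target_fpr."""
    return np.quantile(y_score[y_true == 0], 1.0 - target_fpr)

# Illustrative use: fit one cutoff per group against the same target FPR,
# then verify the resulting FPR/FNR are (approximately) equal across groups.
# for g in np.unique(race):
#     m = race == g
#     t = threshold_for_target_fpr(y[m], scores[m], target_fpr=0.10)
#     print(g, error_rates(y[m], scores[m] >= t))
```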