Indigenous Learners in North America

Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]
*Models predicting college success, operationalized as earning a course grade at or above the median
*An out-of-the-box random forest model performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than for non-URM students (White and Asian), violating demographic parity and equality of opportunity
*Both the model's fairness (demographic parity and equality of opportunity) and its accuracy improved after adjusting the decision threshold from a uniform 0.5 to group-specific values, as sketched below
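
To make the two criteria concrete: demographic parity asks that the positive-prediction rate P(ŷ = 1) be equal across groups, and equality of opportunity asks that the true positive rate be equal across groups. Below is a minimal Python sketch of measuring both gaps under group-specific thresholds of the kind the paper proposes; the function and variable names are illustrative, not the authors' code.

<syntaxhighlight lang="python">
import numpy as np

def fairness_gaps(y_true, scores, groups, thresholds=None):
    """Demographic-parity and equality-of-opportunity gaps under
    (possibly group-specific) decision thresholds.

    y_true: 0/1 outcomes; scores: predicted probabilities;
    groups: group label per student; thresholds: dict of group -> cutoff.
    """
    thresholds = thresholds or {}
    sel_rate, tpr = {}, {}
    for g in np.unique(groups):
        mask = groups == g
        y_hat = (scores[mask] >= thresholds.get(g, 0.5)).astype(int)
        sel_rate[g] = y_hat.mean()                 # P(y_hat = 1 | group)
        tpr[g] = y_hat[y_true[mask] == 1].mean()   # true positive rate in group
    # A perfectly fair model has both gaps at 0
    dp_gap = max(sel_rate.values()) - min(sel_rate.values())
    eo_gap = max(tpr.values()) - min(tpr.values())
    return dp_gap, eo_gap

# Out-of-the-box (uniform 0.5) vs. hypothetical group-specific cutoffs:
# fairness_gaps(y, p, g)
# fairness_gaps(y, p, g, thresholds={"URM": 0.42, "non-URM": 0.55})
</syntaxhighlight>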
 
 
Christie et al. (2019) [https://files.eric.ed.gov/fulltext/ED599217.pdf pdf]
*Models predicting whether a student will drop out of high school
*The decision trees showed little difference in AUC among American Indian and Alaska Native, White, Black, Hispanic, Asian, and Native Hawaiian and Pacific Islander students (a per-group AUC check of this kind is sketched below)
*The decision trees showed very minor differences in AUC between female and male students
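
Per-group AUC is straightforward to compute with scikit-learn. The sketch below is illustrative rather than the paper's code, and it assumes every group contains both dropouts and non-dropouts (AUC is undefined otherwise); the spread between the largest and smallest per-group AUC is one simple way to quantify how little difference there is.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_group(y_true, scores, groups):
    """AUC of a dropout predictor, computed separately within each group."""
    return {g: roc_auc_score(y_true[groups == g], scores[groups == g])
            for g in np.unique(groups)}
</syntaxhighlight>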
 
 
Jiang & Pardos (2021) [https://dl.acm.org/doi/pdf/10.1145/3461702.3462623 pdf]
* Predicting university course grades with an LSTM (a minimal model sketch follows this list)
* Roughly equal accuracy across racial groups (including Native American and Pacific Islander students)
* Accuracy improved slightly (~1%) across racial groups when race was included as a model feature
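
For readers unfamiliar with the setup, the sketch below shows the general shape of such a model in PyTorch: each prior enrollment is embedded as a (course, grade) pair, an LSTM summarizes the sequence, and a linear head predicts the grade bin for the next course. The class name, layer sizes, and input encoding here are assumptions for illustration, not Jiang & Pardos's actual architecture; demographic features such as race would enter as additional inputs concatenated to the embeddings.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class GradeLSTM(nn.Module):
    """Toy LSTM grade predictor: reads a student's sequence of
    (course id, grade bin) pairs and scores the next course's grade bin."""
    def __init__(self, n_courses, n_grade_bins, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.course_embed = nn.Embedding(n_courses, embed_dim)
        self.grade_embed = nn.Embedding(n_grade_bins, embed_dim)
        self.lstm = nn.LSTM(2 * embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_grade_bins)

    def forward(self, course_ids, grade_ids):
        # course_ids, grade_ids: (batch, seq_len) integer tensors
        x = torch.cat([self.course_embed(course_ids),
                       self.grade_embed(grade_ids)], dim=-1)
        out, _ = self.lstm(x)          # (batch, seq_len, hidden_dim)
        return self.head(out[:, -1])   # logits over next-course grade bins

# model = GradeLSTM(n_courses=500, n_grade_bins=5)
# logits = model(course_ids, grade_ids)   # (batch, 5)
</syntaxhighlight>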
 
 
Jeong et al. (2022) [https://fated2022.github.io/assets/pdf/FATED-2022_paper_Jeong_Racial_Bias_ML_Algs.pdf pdf]
* Predicting 9th-grade math scores from academic performance, survey responses, and demographic information
* Despite comparable accuracy across groups, the model tends to underpredict Native American students' performance
* Several fairness correction methods equalize false positive and false negative rates across groups (a minimal audit of those rates is sketched below)
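
Equalizing false positive and false negative rates across groups is the equalized-odds criterion, and the first step of any such correction is measuring the per-group rates. The sketch below uses illustrative names, not the paper's code; standard corrections that then equalize these rates include post-processing in the style of Hardt et al. (2016), available, for example, in the fairlearn library.

<syntaxhighlight lang="python">
import numpy as np

def fpr_fnr_by_group(y_true, y_pred, groups):
    """Per-group false positive and false negative rates
    (the quantities an equalized-odds correction tries to equalize)."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        fp = np.sum((yp == 1) & (yt == 0))   # predicted 1, actually 0
        fn = np.sum((yp == 0) & (yt == 1))   # predicted 0, actually 1
        n_neg, n_pos = np.sum(yt == 0), np.sum(yt == 1)
        rates[g] = {"FPR": fp / n_neg if n_neg else float("nan"),
                    "FNR": fn / n_pos if n_pos else float("nan")}
    return rates
</syntaxhighlight>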
