Course Grade and GPA Prediction

Lee and Kizilcec (2020) [pdf]

  • Models predicting college success, defined as earning the median grade or above
  • Random forest models performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than for non-URM students (White and Asian), and for male students than for female students
  • The model's fairness for URM and male students, measured as demographic parity and equality of opportunity, as well as its accuracy, improved after adjusting the decision thresholds (a minimal thresholding sketch follows this list)
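
Threshold adjustment of this kind can be illustrated by choosing a per-group cutoff so that true positive rates match across groups (equality of opportunity). The sketch below runs on synthetic data and is not the authors' implementation; the variable names, toy data, and target rate are all assumptions.

<syntaxhighlight lang="python">
import numpy as np

def tpr_at_threshold(scores, labels, thresh):
    """True positive rate when predicting 'success' for scores >= thresh."""
    positives = labels == 1
    return float(np.mean(scores[positives] >= thresh))

def equalize_opportunity(scores, labels, groups, target_tpr=0.8):
    """Choose a per-group threshold whose TPR is closest to target_tpr,
    aligning true positive rates (equality of opportunity) across groups."""
    thresholds = {}
    for g in np.unique(groups):
        mask = groups == g
        candidates = np.unique(scores[mask])
        gaps = [abs(tpr_at_threshold(scores[mask], labels[mask], t) - target_tpr)
                for t in candidates]
        thresholds[int(g)] = float(candidates[int(np.argmin(gaps))])
    return thresholds

# Toy data: two groups whose score distributions differ slightly.
rng = np.random.default_rng(0)
n = 1000
groups = rng.integers(0, 2, n)   # hypothetical group indicator
labels = rng.integers(0, 2, n)   # 1 = earned the median grade or above
scores = np.clip(rng.normal(0.55 + 0.10 * labels - 0.05 * groups, 0.15), 0, 1)
print(equalize_opportunity(scores, labels, groups))
</syntaxhighlight>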


Yu et al. (2020) [pdf]

  • Models predicting undergraduate short-term success (course grades) and long-term success (average GPA)
  • Students who are international, first-generation, or from low-income households were inaccurately predicted to earn lower course grades and average GPAs than their peers; fairness of the models improved with the inclusion of clickstream and survey data
  • Female students were inaccurately predicted to achieve greater short-term and long-term success than male students; fairness improved when a combination of institutional and clickstream data was used in the model (a sketch of measuring such group-wise bias follows this list)
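
One way to check this kind of bias is to compare the mean signed error (predicted minus actual GPA) per subgroup for models trained on different feature sets. The sketch below uses synthetic data and a plain linear regression; the feature names and the first-generation indicator are illustrative assumptions, not the authors' data or model.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 2000
first_gen = rng.integers(0, 2, n)            # hypothetical group indicator
institutional = rng.normal(0, 1, (n, 3))     # e.g. admissions-style features
# LMS activity that correlates with the group indicator.
clickstream = rng.normal(0.3 * (1 - first_gen)[:, None], 1.0, (n, 2))
gpa = 3.0 + 0.2 * clickstream.sum(axis=1) + rng.normal(0, 0.3, n)

def group_bias(features, y, group):
    """Mean signed error (prediction - actual) per group for a linear model."""
    pred = LinearRegression().fit(features, y).predict(features)
    return {int(g): float(np.mean(pred[group == g] - y[group == g]))
            for g in np.unique(group)}

print("institutional only:", group_bias(institutional, gpa, first_gen))
print("with clickstream:  ",
      group_bias(np.hstack([institutional, clickstream]), gpa, first_gen))
</syntaxhighlight>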


Riazy et al. (2020) [pdf]

  • Models predicting course outcome of students in a virtual learning environment (VLE)
  • Slightly more male students were predicted to pass the course than female students, but this overestimation was not consistent across different algorithms.
  • Among the algorithms, Naive Bayes had the lowest normalized mutual information value and the highest ABROCA value (Absolute Between-ROC Area, the area between the two groups' ROC curves; see the sketch after this list)
  • Students with a self-declared disability were predicted to pass the course at a rate 16-23 percentage points higher (in their favor), depending on whether the training or test set was used
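
ABROCA can be computed by building a ROC curve for each group and integrating the absolute difference between them over a common false positive rate grid. The sketch below is a minimal version on synthetic data; the group indicator and score construction are assumptions for illustration, not the study's data.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, groups):
    """Area between two groups' ROC curves, integrated over a shared FPR grid."""
    grid = np.linspace(0, 1, 1001)
    tprs = []
    for g in np.unique(groups):
        mask = groups == g
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        tprs.append(np.interp(grid, fpr, tpr))  # fpr is sorted ascending
    return float(np.trapz(np.abs(tprs[0] - tprs[1]), grid))

# Toy data where scores are more informative for one group than the other.
rng = np.random.default_rng(2)
n = 1000
groups = rng.integers(0, 2, n)
y = rng.integers(0, 2, n)
score = np.clip(0.5 + (0.30 - 0.15 * groups) * (2 * y - 1)
                + rng.normal(0, 0.2, n), 0, 1)
print(abroca(y, score, groups))
</syntaxhighlight>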