Course Grade and GPA Prediction
Revision as of 06:49, 18 May 2022
Lee and Kizilcec (2020) pdf
- Models predicting college success (or median grade or above)
- Random forest algorithms performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than for non-URM students (White and Asian)
- Random forest algorithms performed significantly worse for male students than for female students
- The fairness of the model, namely demographic parity and equality of opportunity, as well as its accuracy, improved after adjusting the classification threshold from a uniform 0.5 to group-specific values
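The group-specific-threshold correction can be illustrated with a short sketch (hypothetical data and group names, not the paper's actual code): demographic parity compares positive-prediction rates across groups, and replacing a uniform 0.5 cutoff with per-group cutoffs shrinks the gap.

```python
import numpy as np

def demographic_parity_gap(scores, groups, thresholds):
    """Absolute difference in positive-prediction rates between two
    groups, given a per-group decision threshold."""
    rates = {g: np.mean(scores[groups == g] >= thresholds[g])
             for g in np.unique(groups)}
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

# Hypothetical predicted pass probabilities for two groups, "a" and "b".
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(5, 2, 500),   # group "a": skews high
                         rng.beta(2, 5, 500)])  # group "b": skews low
groups = np.array(["a"] * 500 + ["b"] * 500)

# Uniform 0.5 threshold vs. group-specific thresholds chosen so both
# groups get the same positive-prediction rate (each group's median score).
uniform = demographic_parity_gap(scores, groups, {"a": 0.5, "b": 0.5})
per_group = {g: np.median(scores[groups == g]) for g in ("a", "b")}
adjusted = demographic_parity_gap(scores, groups, per_group)
```

With the uniform cutoff the gap is large because the score distributions differ; with per-group cutoffs it drops to essentially zero. Equality of opportunity would be checked the same way, but on true-positive rates rather than raw positive-prediction rates.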
Yu et al. (2020) pdf
- Models predicting undergraduate course grades and average GPA
- Students who are international, first-generation, or from low-income households were inaccurately predicted to earn lower course grades and average GPAs than their peers, and the fairness of the models improved with the inclusion of clickstream and survey data
- Female students were inaccurately predicted to achieve greater short-term and long-term success than male students, and the fairness of the models improved when a combination of institutional and clickstream data was used in the model
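One common way to quantify the kind of group-level over- and under-prediction reported above is the mean signed residual per group. This is an illustrative sketch on simulated data; the group labels and the 0.3-point offset are invented for the example.

```python
import numpy as np

def mean_residual_by_group(y_true, y_pred, groups):
    """Mean signed prediction error (pred - true) per group; a negative
    value means the model systematically under-predicts that group."""
    return {g: float(np.mean(y_pred[groups == g] - y_true[groups == g]))
            for g in np.unique(groups)}

# Simulated GPAs where the model under-predicts group "fg"
# (e.g., first-generation students) by a constant offset.
rng = np.random.default_rng(1)
y_true = rng.uniform(2.0, 4.0, 200)
groups = np.array(["fg"] * 100 + ["cg"] * 100)
y_pred = y_true + rng.normal(0.0, 0.1, 200)   # small random error
y_pred[groups == "fg"] -= 0.3                 # simulated systematic bias

bias = mean_residual_by_group(y_true, y_pred, groups)
```

A near-zero mean residual for one group alongside a clearly negative one for another is exactly the asymmetry that adding clickstream or survey features is reported to reduce.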
Riazy et al. (2020) pdf
- Models predicting course outcome of students in a virtual learning environment (VLE)
- More male students were predicted to pass the course than female students, but this overestimation was fairly small and not consistent across different algorithms
- Among the algorithms, Naive Bayes had the lowest normalized mutual information value and the highest ABROCA (Absolute Between-ROC Area) value, i.e., the largest difference between the groups' areas under the ROC curve
- Students with a self-declared disability were predicted to pass the course at rates 16-23 percentage points higher than their actual outcomes in the training and test sets
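The ABROCA metric used above can be computed as the area between two groups' ROC curves evaluated on a common false-positive-rate grid. The sketch below uses simulated scores (not the paper's data) and assumes no tied scores; where one group's classifier is informative and the other's is not, ABROCA is large.

```python
import numpy as np

def tpr_on_grid(y_true, scores, grid):
    """Group ROC curve by threshold sweep, interpolated onto a common
    FPR grid so two curves can be compared pointwise."""
    order = np.argsort(-scores)
    y_sorted = y_true[order]
    tpr = np.cumsum(y_sorted) / y_sorted.sum()
    fpr = np.cumsum(1 - y_sorted) / (len(y_sorted) - y_sorted.sum())
    return np.interp(grid, np.concatenate(([0.0], fpr)),
                     np.concatenate(([0.0], tpr)))

def abroca(y_true, scores, groups):
    """Absolute Between-ROC Area: integral of |TPR_a - TPR_b| over FPR."""
    grid = np.linspace(0.0, 1.0, 201)
    g0, g1 = np.unique(groups)
    c0 = tpr_on_grid(y_true[groups == g0], scores[groups == g0], grid)
    c1 = tpr_on_grid(y_true[groups == g1], scores[groups == g1], grid)
    d = np.abs(c0 - c1)
    # trapezoidal integration on the uniform grid
    return float(np.sum((d[:-1] + d[1:]) * 0.5) * (grid[1] - grid[0]))

# Simulated binary outcomes: scores are informative for group "a"
# (positives and negatives fully separated) but pure noise for group "b".
rng = np.random.default_rng(2)
n = 300
y = rng.integers(0, 2, n)
groups = np.array(["a"] * 150 + ["b"] * 150)
scores = np.where(groups == "a", 0.7 * y + 0.3 * rng.random(n),
                  rng.random(n))

val = abroca(y, scores, groups)
```

ABROCA lies in [0, 1] and is 0 only when the two groups' ROC curves coincide, which is why a high value for Naive Bayes signals unequal model quality across groups even when overall accuracy looks acceptable.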