Course Grade and GPA Prediction

From Penn Center for Learning Analytics Wiki
Hu and Rangwala (2020) [https://files.eric.ed.gov/fulltext/ED608050.pdf pdf]
* Models predicting whether a college student will fail a course
* The multiple cooperative classifier model (MCCM) was the best at reducing bias (discrimination against African-American students), while other models, particularly logistic regression and Rawlsian fairness, performed far worse
* The level of bias was inconsistent across courses, with MCCM predictions showing the least bias for Psychology and the greatest bias for Computer Science


Švábenský et al. (2024) [https://educationaldatamining.org/edm2024/proceedings/2024.EDM-posters.82/2024.EDM-posters.82.pdf pdf]
* Classification models for predicting grades: worse than the course average (“unsuccessful”) or equal to or better than the average (“successful”)
* Investigated bias based on university students' regional background in the context of the Philippines
* Demographic groups based on which of five locations students accessed online courses in Canvas from
* Bias evaluation using AUC, weighted F1-score, and MADD showed consistent results across all groups; no unfairness was observed (a sketch of this per-group evaluation follows below)
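
This per-group evaluation pattern can be sketched in a few lines. The sketch below is illustrative rather than the paper's code: AUC and weighted F1 come from scikit-learn, and <code>madd</code> follows the published definition of MADD (Model Absolute Density Distance; Verger et al., 2023) as the summed absolute difference between two groups' binned densities of predicted probabilities. The bin count is an assumed tuning parameter, and all variable names are hypothetical.
<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def madd(probs_a, probs_b, n_bins=20):
    """MADD: sum of absolute differences between the two groups'
    normalized histograms of predicted probabilities.
    0 = the model treats both groups identically; 2 = disjoint."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    da, _ = np.histogram(probs_a, bins=bins)
    db, _ = np.histogram(probs_b, bins=bins)
    return np.abs(da / da.sum() - db / db.sum()).sum()

def per_group_report(y_true, y_pred, y_prob, group):
    """AUC and weighted F1 per demographic group, plus MADD between
    each group and the rest of the population."""
    for g in np.unique(group):
        m = group == g
        print(f"group={g}: "
              f"AUC={roc_auc_score(y_true[m], y_prob[m]):.3f}  "
              f"F1w={f1_score(y_true[m], y_pred[m], average='weighted'):.3f}  "
              f"MADD={madd(y_prob[m], y_prob[~m]):.3f}")
</syntaxhighlight>
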
Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]
* Models predicting college success, defined as earning a course grade at or above the median
* An out-of-the-box random forest model violated demographic parity and equality of opportunity for underrepresented minority (URM) students (American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) relative to non-URM students (White and Asian)
* The model also performed significantly worse for male students than for female students
* Both fairness (demographic parity and equality of opportunity) and accuracy improved after correcting the decision threshold from the default 0.5 to group-specific values (a minimal sketch follows below)
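
A minimal sketch of the group-specific threshold idea, assuming predicted probabilities from any classifier: each group receives its own cutoff, chosen here so that every group is flagged positive at the same target rate (one simple route to demographic parity). The selection rule and all names below are illustrative assumptions, not the paper's exact procedure.
<syntaxhighlight lang="python">
import numpy as np

def group_thresholds(y_prob, group, target_rate):
    """One cutoff per group, set so each group's positive-prediction
    rate is roughly target_rate (e.g., the overall base rate)."""
    cutoffs = {}
    for g in np.unique(group):
        scores = y_prob[group == g]
        # Scores above the (1 - target_rate) quantile flag about
        # target_rate of this group as positive.
        cutoffs[g] = np.quantile(scores, 1.0 - target_rate)
    return cutoffs

def predict_with_cutoffs(y_prob, group, cutoffs):
    return np.array([p >= cutoffs[g] for p, g in zip(y_prob, group)])
</syntaxhighlight>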
 
 
Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]
* Models predicting undergraduate course grades and average GPA
* Students who are international, first-generation, or from low-income households were inaccurately predicted to earn lower course grades and average GPAs than their peers; model fairness improved with the inclusion of clickstream and survey data
* Female students were inaccurately predicted to achieve greater short-term and long-term success than male students; fairness improved when a combination of institutional and clickstream data was used in the model

Riazy et al. (2020) [https://www.scitepress.org/Papers/2020/93241/93241.pdf pdf]
* Models predicting course outcome of students in a virtual learning environment (VLE)
* Students with a self-declared disability were predicted to pass the course more often than their peers, an overestimation of 16-23 percentage points across the training and test sets
* More male students were predicted to pass the course than female students, but this overestimation was fairly small and not consistent across algorithms
* Among the algorithms, Naive Bayes had the lowest normalized mutual information value and the highest ABROCA value, i.e., the largest area between the groups' ROC curves (a sketch of ABROCA follows below)
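
ABROCA (absolute between-ROC area; introduced by Gardner, Brooks, and Baker in 2019) measures the area between two groups' ROC curves, so 0 means the model ranks students equally well in both groups. A minimal sketch assuming binary pass/fail labels and predicted probabilities, with illustrative names:
<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_prob, group, g_base, g_comp, grid_size=10_000):
    """Area between the two groups' ROC curves: interpolate both
    TPR curves onto a shared FPR grid, then average the absolute
    gap (a Riemann approximation of the area on the unit interval)."""
    fpr_grid = np.linspace(0.0, 1.0, grid_size)
    tpr = {}
    for g in (g_base, g_comp):
        m = group == g
        fpr_g, tpr_g, _ = roc_curve(y_true[m], y_prob[m])
        tpr[g] = np.interp(fpr_grid, fpr_g, tpr_g)
    return np.abs(tpr[g_base] - tpr[g_comp]).mean()
</syntaxhighlight>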
 
 
Jiang & Pardos (2021) [https://dl.acm.org/doi/pdf/10.1145/3461702.3462623 pdf]
* Predicting university course grades using an LSTM (a minimal model sketch follows below)
* Roughly equal accuracy across racial groups
* Slightly better accuracy (~1%) across racial groups when race was included in the model
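
A minimal PyTorch sketch of an LSTM grade predictor of the general kind described above: course and grade tokens are embedded, the LSTM consumes a student's enrollment history, and a linear head scores the next grade class. The architecture, sizes, and input encoding are assumptions for illustration, not the paper's model.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class GradeLSTM(nn.Module):
    """Predicts the grade bucket of each next course from the
    sequence of (course id, grade bucket) pairs seen so far."""
    def __init__(self, n_courses, n_grades, emb_dim=32, hidden=64):
        super().__init__()
        self.course_emb = nn.Embedding(n_courses, emb_dim)
        self.grade_emb = nn.Embedding(n_grades, emb_dim)
        self.lstm = nn.LSTM(2 * emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_grades)

    def forward(self, courses, grades):
        # courses, grades: (batch, seq_len) integer tensors
        x = torch.cat([self.course_emb(courses),
                       self.grade_emb(grades)], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out)  # (batch, seq_len, n_grades) logits
</syntaxhighlight>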
 
 
Kung & Yu (2020) [https://dl.acm.org/doi/pdf/10.1145/3386527.3406755 pdf]
* Predicting course grades and later GPA at a public U.S. university
* Five algorithms analyzed against three fairness metrics: independence, separation, and sufficiency (defined in the sketch below)
* Poorer performance for Latinx students on course grade prediction for all three metrics; poorer performance for Latinx students on GPA prediction in terms of independence and sufficiency, but not separation
* Poorer performance for first-generation students on course grade prediction for independence and separation, and for some algorithms for GPA prediction as well
* Poorer performance for low-income students in several cases, about one third of the cases checked
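
These three metrics correspond to the standard fairness criteria: independence (positive predictions equally common across groups), separation (equal error rates given the true outcome), and sufficiency (equal outcome rates given the prediction). Below is a minimal sketch of the pairwise gaps for a binary classifier, with illustrative names; the paper's exact operationalization may differ.
<syntaxhighlight lang="python">
import numpy as np

def fairness_gaps(y_true, y_pred, group, g0, g1):
    """Gaps between two groups on independence, separation, and
    sufficiency, for 0/1 labels and predictions."""
    m0, m1 = group == g0, group == g1
    def pos_rate(m): return y_pred[m].mean()              # independence
    def tpr(m): return y_pred[m & (y_true == 1)].mean()   # separation
    def fpr(m): return y_pred[m & (y_true == 0)].mean()   # separation
    def prec(m): return y_true[m & (y_pred == 1)].mean()  # sufficiency
    return {
        "independence_gap": abs(pos_rate(m0) - pos_rate(m1)),
        "separation_tpr_gap": abs(tpr(m0) - tpr(m1)),
        "separation_fpr_gap": abs(fpr(m0) - fpr(m1)),
        "sufficiency_precision_gap": abs(prec(m0) - prec(m1)),
    }
</syntaxhighlight>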
 
 
Jeong et al. (2022) [https://fated2022.github.io/assets/pdf/FATED-2022_paper_Jeong_Racial_Bias_ML_Algs.pdf pdf]
* Predicting 9th grade math score from academic performance, surveys, and demographic information
* Despite comparable accuracy, the model tended to overpredict Asian and White students' performance and underpredict Black, Hispanic, and Native American students' performance
* Several fairness correction methods equalized false positive and false negative rates across groups (a sketch of the idea follows below)
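
A sketch of one simple correction in this spirit, not a specific method from the paper: keep a reference group's default 0.5 cutoff and grid-search each other group's cutoff to minimize its worst gap in false positive and false negative rates against the reference. All names are illustrative.
<syntaxhighlight lang="python">
import numpy as np

def fpr_fnr(y_true, y_prob, cutoff):
    pred = y_prob >= cutoff
    return pred[y_true == 0].mean(), (~pred)[y_true == 1].mean()

def equalize_error_rates(y_true, y_prob, group, g_ref):
    """Per-group cutoffs that pull each group's FPR and FNR toward
    the reference group's rates at the default 0.5 cutoff."""
    ref = group == g_ref
    fpr_ref, fnr_ref = fpr_fnr(y_true[ref], y_prob[ref], 0.5)
    grid = np.linspace(0.05, 0.95, 91)
    cutoffs = {g_ref: 0.5}
    for g in np.unique(group):
        if g == g_ref:
            continue
        m = group == g
        gaps = [max(abs(f - fpr_ref), abs(n - fnr_ref))
                for f, n in (fpr_fnr(y_true[m], y_prob[m], t) for t in grid)]
        cutoffs[g] = grid[int(np.argmin(gaps))]
    return cutoffs
</syntaxhighlight>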
 
 
Sha et al. (2022) [https://ieeexplore.ieee.org/abstract/document/9849852 link]
* Predicting course pass/fail with random forest on Open University data
* A range of over-sampling methods was tested (a baseline sketch follows below)
* Regardless of the over-sampling method used, pass/fail prediction performance was moderately better for male students
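
The simplest member of the over-sampling family is random over-sampling, which duplicates minority-class rows until the classes balance; the methods the paper compares are more elaborate, so the sketch below is a baseline for orientation only, with illustrative names.
<syntaxhighlight lang="python">
import numpy as np

def random_oversample(X, y, seed=0):
    """Resample (with replacement) every under-represented class
    until all classes match the size of the largest one."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        rows = np.flatnonzero(y == c)
        idx.append(rows)
        if n < n_max:
            idx.append(rng.choice(rows, size=n_max - n, replace=True))
    idx = np.concatenate(idx)
    rng.shuffle(idx)
    return X[idx], y[idx]
</syntaxhighlight>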
 
 
Deho et al. (2023) [https://files.osf.io/v1/resources/5am9z/providers/osfstorage/63eaf170a3fade041fe7c9db?format=pdf&action=download&direct&version=1 pdf]
* Predicting whether a course grade will be above or below 0.5
* Better prediction for female students in some courses, better prediction for male students in other courses
* Generally worse prediction for international students
