Gender: Male/Female
Kai et al. (2017) pdf
- Models predicting student retention in an online college program
- J48 decision trees achieved significantly lower Kappa but higher AUC for male students than female students
- JRip decision rules achieved much lower Kappa and AUC for male students than female students (computing per-group Kappa and AUC is sketched after this entry)
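A minimal sketch of this kind of per-group comparison, assuming scikit-learn; the arrays y_true (retention labels), y_prob (model scores), and gender are hypothetical stand-ins for a real evaluation set, not Kai et al.'s data or code:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Hypothetical evaluation data for a binary retention classifier.
rng = np.random.default_rng(0)
n = 1000
gender = rng.choice(["male", "female"], size=n)   # protected attribute
y_true = rng.integers(0, 2, size=n)               # 1 = student retained
# Scores loosely correlated with the labels, for illustration only.
y_prob = np.clip(0.35 * y_true + rng.uniform(0, 0.65, size=n), 0.0, 1.0)

# Compute Kappa and AUC separately for each gender group.
for group in ("male", "female"):
    mask = gender == group
    y_pred = (y_prob[mask] >= 0.5).astype(int)    # hard labels at a 0.5 cutoff
    kappa = cohen_kappa_score(y_true[mask], y_pred)
    auc = roc_auc_score(y_true[mask], y_prob[mask])
    print(f"{group}: Kappa = {kappa:.3f}, AUC = {auc:.3f}")
</syntaxhighlight>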
Christie et al. (2019) pdf
- Models predicting students' high school dropout
- The decision trees showed very minor differences in AUC between female and male students
Hu and Rangwala (2020) pdf
- Models predicting whether a college student will fail a course
- The multiple cooperative classifier model (MCCM) was the best at reducing bias (here, discrimination against male students), performing particularly well for the Psychology course.
- Other models (logistic regression and Rawlsian fairness) performed far worse for male students, particularly in Computer Science and Electrical Engineering.
Anderson et al. (2019) pdf
- Models predicting six-year college graduation
- False negative rates were higher for male students than female students when SVM, Logistic Regression, and SGD were used
Gardner, Brooks and Baker (2019) [https://www.upenn.edu/learninganalytics/ryanbaker/LAK_PAPER97_CAMERA.pdf pdf]
- Model predicting MOOC dropout, evaluated specifically through slicing analysis (sketched below)
- Some of the algorithms studied performed worse for female students than for male students, particularly in courses with a male presence of 45% or less
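Slicing analysis evaluates one trained model separately on subgroups ("slices") of the test data, here (course, gender) pairs. A minimal sketch of the idea with hypothetical data, not Gardner, Brooks and Baker's actual code:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-student evaluation data pooled across several MOOCs.
rng = np.random.default_rng(4)
n = 3000
course = rng.choice(["mooc_a", "mooc_b", "mooc_c"], size=n)
gender = rng.choice(["male", "female"], size=n)
y_true = rng.integers(0, 2, size=n)               # 1 = dropped out
y_prob = np.clip(0.35 * y_true + rng.uniform(0, 0.65, size=n), 0.0, 1.0)

# One slice per (course, gender) pair; compare AUC across slices.
for c in np.unique(course):
    aucs = {}
    for g in ("male", "female"):
        mask = (course == c) & (gender == g)
        aucs[g] = roc_auc_score(y_true[mask], y_prob[mask])
    gap = aucs["male"] - aucs["female"]
    print(f"{c}: male AUC = {aucs['male']:.3f}, "
          f"female AUC = {aucs['female']:.3f}, gap = {gap:+.3f}")
</syntaxhighlight>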
Riazy et al. (2020) [https://www.scitepress.org/Papers/2020/93241/93241.pdf pdf]
- Model predicting course outcome
- Marginal differences between groups were found in prediction quality and in the overall proportion of students predicted to pass
- These differences were inconsistent in direction across algorithms.
Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]
- Models predicting college success (earning the median grade or above)
- Random forest algorithms performed significantly worse for male students than female students
- The model's fairness (namely demographic parity and equality of opportunity) as well as its accuracy improved after correcting the threshold values (group-specific thresholding is sketched below)
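One common form of threshold correction picks a separate cutoff per group so that both groups are flagged at the same rate (demographic parity). A minimal sketch under that assumption, with hypothetical scores, not Lee and Kizilcec's actual code:

<syntaxhighlight lang="python">
import numpy as np

def parity_thresholds(y_prob, gender, positive_rate):
    """Per-group score thresholds giving every group the same share of
    positive ("predicted to succeed") labels, i.e. demographic parity.
    Equality of opportunity would instead equalize true positive rates."""
    thresholds = {}
    for group in np.unique(gender):
        scores = y_prob[gender == group]
        # The (1 - positive_rate) quantile leaves `positive_rate` of the
        # group's scores at or above the threshold.
        thresholds[group] = np.quantile(scores, 1.0 - positive_rate)
    return thresholds

# Hypothetical model scores and group labels.
rng = np.random.default_rng(1)
gender = rng.choice(["male", "female"], size=500)
y_prob = rng.uniform(size=500)

for group, t in parity_thresholds(y_prob, gender, positive_rate=0.4).items():
    rate = float((y_prob[gender == group] >= t).mean())
    print(f"{group}: threshold = {t:.3f}, positive rate = {rate:.2f}")
</syntaxhighlight>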
Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]
- Model predicting undergraduate short-term (course grades) and long-term (average GPA) success
- Female students were inaccurately predicted to achieve greater short-term and long-term success than male students.
- The fairness of the models improved when a combination of institutional data and click data was used in the model
Yu and colleagues (2021) [https://dl.acm.org/doi/pdf/10.1145/3430895.3460139 pdf]
- Models predicting college dropout for students in residential and fully online programs
- Whether or not the protected attributes were included, the models had worse true negative rates but better recall for male students
- The models performed worse for male students in the fully online program in terms of true negative rate, recall, and accuracy (computing these per-group rates is sketched below).
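A minimal sketch of how such per-group rates are read off a confusion matrix, with hypothetical data rather than Yu and colleagues' code:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels for a binary dropout classifier (1 = dropped out).
rng = np.random.default_rng(2)
gender = rng.choice(["male", "female"], size=800)
y_true = rng.integers(0, 2, size=800)
# Predictions that agree with the truth 75% of the time, for illustration.
y_pred = np.where(rng.uniform(size=800) < 0.75, y_true, 1 - y_true)

for group in ("male", "female"):
    mask = gender == group
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask]).ravel()
    tnr = tn / (tn + fp)       # true negative rate (specificity)
    recall = tp / (tp + fn)    # recall / true positive rate
    print(f"{group}: TNR = {tnr:.3f}, recall = {recall:.3f}")
</syntaxhighlight>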
Riazy et al. (2020) pdf
- Models predicting course outcome of students in a virtual learning environment (VLE)
- More male students were predicted to pass the course than female students, but this overestimation was fairly small and not consistent across different algorithms
- Among the algorithms, Naive Bayes had the lowest normalized mutual information value and the highest ABROCA (absolute between-ROC area) value; a sketch of ABROCA follows this entry
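ABROCA (introduced by Gardner, Brooks and Baker, 2019) integrates the absolute gap between two groups' ROC curves over the false positive rate, so 0 means the model ranks students equally well in both groups. A minimal sketch with hypothetical data, not Riazy et al.'s actual code:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_prob, gender, group_a="male", group_b="female"):
    """Area between the two groups' ROC curves, integrated over a
    common false-positive-rate grid on [0, 1]."""
    grid = np.linspace(0.0, 1.0, 1001)
    curves = []
    for group in (group_a, group_b):
        mask = gender == group
        fpr, tpr, _ = roc_curve(y_true[mask], y_prob[mask])
        curves.append(np.interp(grid, fpr, tpr))  # TPR on the common grid
    # Mean |gap| over a uniform grid on [0, 1] approximates the integral.
    return float(np.abs(curves[0] - curves[1]).mean())

# Hypothetical labels and scores.
rng = np.random.default_rng(3)
gender = rng.choice(["male", "female"], size=1000)
y_true = rng.integers(0, 2, size=1000)
y_prob = np.clip(0.35 * y_true + rng.uniform(0, 0.65, size=1000), 0.0, 1.0)

print(f"ABROCA = {abroca(y_true, y_prob, gender):.4f}")
</syntaxhighlight>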
Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring pdf]
- Automated scoring model for evaluating English essays (e-rater)
- E-rater performed accurately for both male and female students when assessing 11th-grade English essays and the independent writing task in the Test of English as a Foreign Language (TOEFL)
- While feature-level score differences were identified across gender and ethnic groups (e.g., e-rater gave higher scores for word length and vocabulary level but lower scores for grammar and mechanics when grading 11th-grade essays written by Asian American female students), the authors called for larger samples to confirm the findings