National and International Examination

Baker et al. (2020) [https://www.upenn.edu/learninganalytics/ryanbaker/BakerBerningGowda.pdf pdf]

* Model predicting student graduation and SAT scores for military-connected students
* For prediction of graduation, applying the models across populations yielded an AUC of 0.60, degrading from their original performance (AUC of 0.70 or 0.71) toward chance; see the worked example after this list
* For prediction of SAT scores, applying the models across populations yielded Spearman's ρ values of 0.42 and 0.44, degrading about a third of the way from their original performance toward chance
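
The "degrading toward chance" framing in the bullets above can be made concrete by expressing the drop as the fraction of the gap between the original score and chance-level performance (an AUC of 0.50, or a correlation of 0) that is lost when the model is transferred. The snippet below is a minimal worked illustration using the figures quoted above; the function name and output phrasing are ours, not Baker et al.'s.

<syntaxhighlight lang="python">
def fraction_degraded_toward_chance(original, transferred, chance):
    """Share of the gap between the original score and chance-level
    performance that is lost when the model is applied across populations."""
    return (original - transferred) / (original - chance)

# Graduation models: original AUC of 0.70 and 0.71, AUC of 0.60 after transfer,
# chance-level AUC of 0.50.
for original_auc in (0.70, 0.71):
    lost = fraction_degraded_toward_chance(original_auc, 0.60, chance=0.50)
    print(f"AUC {original_auc} -> 0.60: {lost:.0%} of the gap to chance lost")

# The same formula applies to the SAT models with chance = 0.0 (no correlation);
# "a third of the way toward chance" corresponds to a fraction of about 0.33.
</syntaxhighlight>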


Li et al. (2021) [https://arxiv.org/pdf/2103.15212.pdf pdf]

* Model predicting student achievement on the PISA standardized examination
* The U.S.-trained model was less accurate for students from countries with lower national development scores (e.g. Indonesia, Vietnam, Moldova); see the sketch after this list
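
Li et al.'s comparison amounts to breaking a transferred model's prediction error down by country. The sketch below shows that kind of per-country breakdown on entirely synthetic data; the column names, the RMSE metric, and the simulated error structure are illustrative assumptions, not the paper's actual data or pipeline.

<syntaxhighlight lang="python">
import numpy as np
import pandas as pd

# Entirely synthetic stand-in for PISA-style data: true achievement scores and a
# hypothetical "U.S.-trained" model's predictions, tagged by country.
rng = np.random.default_rng(0)
countries = ["USA", "Indonesia", "Vietnam", "Moldova"]
df = pd.DataFrame({
    "country": rng.choice(countries, size=400),
    "true_score": rng.normal(480, 90, size=400),
})

# Simulate larger prediction error outside the (hypothetical) training population.
noise_sd = df["country"].map(lambda c: 40 if c == "USA" else 80)
df["predicted_score"] = df["true_score"] + rng.normal(0, noise_sd)

# Per-country RMSE: the kind of breakdown that shows inaccuracy is not evenly
# distributed across national contexts.
rmse_by_country = (
    df.assign(sq_err=(df["true_score"] - df["predicted_score"]) ** 2)
      .groupby("country")["sq_err"]
      .mean()
      .pow(0.5)
      .sort_values()
)
print(rmse_by_country)
</syntaxhighlight>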