Other NLP Applications of Algorithms in Education
Naismith et al. (2018) [http://d-scholarship.pitt.edu/40665/1/EDM2018_paper_37.pdf pdf]
* a model that measures L2 learners’ lexical sophistication using frequency lists derived from native-speaker corpora (a minimal sketch of this approach appears after this list)
* Arabic-speaking learners are rated systematically lower across all levels of English proficiency than speakers of Chinese, Japanese, Korean, and Spanish.
* When the New General Service List (NGSL) is used on the Pitt English Language Institute Corpus (PELIC), Level 5 Arabic-speaking learners are unfairly evaluated as having a similar level of lexical sophistication to Level 4 learners from China, Japan, Korea, and Spain.
* When used on the ETS corpus, “high”-labeled essays by Japanese-speaking learners are rated significantly lower in lexical sophistication than those of their Arabic-, Korean-, and Spanish-speaking peers.
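A minimal sketch of a frequency-list measure of lexical sophistication, for illustration only: the toy word list, tokenization, and scoring below are assumptions standing in for Naismith et al.’s actual method, which scores learners against native-speaker frequency lists such as the NGSL.

<syntaxhighlight lang="python">
# Illustrative only: sophistication = share of tokens NOT on a
# high-frequency list (a stand-in for the NGSL). Naismith et al.'s
# exact tokenization and scoring may differ.

def lexical_sophistication(text, high_freq_words):
    """Fraction of alphabetic tokens absent from the high-frequency list."""
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    if not tokens:
        return 0.0
    off_list = [t for t in tokens if t not in high_freq_words]
    return len(off_list) / len(tokens)

# Toy high-frequency list standing in for the real NGSL.
ngsl_like = {"the", "a", "is", "of", "and", "to", "in", "that", "it", "was"}

print(lexical_sophistication("the hypothesis was ostensibly corroborated", ngsl_like))  # 0.6
</syntaxhighlight>

Because the reference list is built from native-speaker corpora, the share of off-list words a learner produces can vary with L1 for reasons unrelated to proficiency, which is one plausible reading of the disparities reported above.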
Samei et al. (2015) [https://files.eric.ed.gov/fulltext/ED560879.pdf pdf]
* Models predicting classroom discourse properties (e.g. authenticity and uptake)
* A model trained on urban students (authenticity: 0.62, uptake: 0.60) performed with similar accuracy when tested on non-urban students (authenticity: 0.62, uptake: 0.62)
* A model trained on non-urban students (authenticity: 0.61, uptake: 0.59) performed with similar accuracy when tested on urban students (authenticity: 0.60, uptake: 0.63); this train-on-one-group, test-on-the-other protocol is sketched below
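The cross-group evaluation above reduces to a simple protocol: fit on one demographic slice, score on the other, and compare both directions. The sketch below uses synthetic features and a generic scikit-learn classifier; Samei et al.’s discourse features and model are not reproduced here.

<syntaxhighlight lang="python">
# Sketch of cross-group evaluation: train on one demographic slice,
# test on the other, in both directions. Features/labels are synthetic;
# Samei et al.'s discourse features and classifier are not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_urban, y_urban = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
X_other, y_other = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)

pairs = [(("urban", X_urban, y_urban), ("non-urban", X_other, y_other)),
         (("non-urban", X_other, y_other), ("urban", X_urban, y_urban))]
for (name_a, X_a, y_a), (name_b, X_b, y_b) in pairs:
    model = LogisticRegression().fit(X_a, y_a)
    acc = accuracy_score(y_b, model.predict(X_b))
    print(f"train on {name_a}, test on {name_b}: accuracy {acc:.2f}")
</syntaxhighlight>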
Sha et al. (2021) [https://angusglchen.github.io/files/AIED2021_Lele_Assessing.pdf pdf]
* Models predicting whether a MOOC discussion forum post is content-relevant or content-irrelevant
* MOOCs taught in English
* Some algorithms achieved ABROCA under 0.01 for female versus male students, but other algorithms (Naive Bayes) had ABROCA as high as 0.06 (a sketch of the ABROCA computation appears after this list)
* ABROCA varied from 0.03 to 0.08 for non-native versus native speakers of English
* Balancing the size of each group in the training set reduced ABROCA values
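ABROCA (Absolute Between-ROC Area) integrates the absolute gap between two groups’ ROC curves across the false-positive-rate axis, so 0 means the model separates classes identically well for both groups. A minimal sketch with synthetic scores and labels (not Sha et al.’s models or data):

<syntaxhighlight lang="python">
# Sketch of ABROCA: interpolate each group's ROC curve onto a shared
# false-positive-rate grid and integrate the absolute gap between the
# true-positive rates. Scores and labels below are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_a, scores_a, y_b, scores_b, grid_size=1000):
    fpr_a, tpr_a, _ = roc_curve(y_a, scores_a)
    fpr_b, tpr_b, _ = roc_curve(y_b, scores_b)
    grid = np.linspace(0.0, 1.0, grid_size)
    gap = np.abs(np.interp(grid, fpr_a, tpr_a) - np.interp(grid, fpr_b, tpr_b))
    # Trapezoidal integration of the gap over the FPR axis.
    return float(np.sum((gap[:-1] + gap[1:]) / 2 * np.diff(grid)))

rng = np.random.default_rng(1)
y_f, y_m = rng.integers(0, 2, 400), rng.integers(0, 2, 400)
s_f = 0.60 * y_f + rng.normal(0, 0.30, 400)  # slightly stronger signal
s_m = 0.45 * y_m + rng.normal(0, 0.35, 400)  # slightly weaker signal
print(f"ABROCA: {abroca(y_f, s_f, y_m, s_m):.3f}")
</syntaxhighlight>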
Sha et al. (2022) [https://ieeexplore.ieee.org/abstract/document/9849852 link]
* Predicting forum post relevance to the course in Moodle data (neural network)
* A range of over-sampling methods tested (a generic over-sampling sketch appears after this list)
* Regardless of the over-sampling method used, forum post relevance performance was moderately better for female students.
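The simplest member of the over-sampling family is random over-sampling: duplicate training rows from the smaller demographic group until group sizes match. The sketch below shows that baseline only; whether Sha et al. include this exact variant, and which other methods they compare, is not specified above, and the arrays are hypothetical.

<syntaxhighlight lang="python">
# Sketch of random over-sampling on a binary demographic attribute:
# duplicate rows from the smaller group until group sizes match.
# A baseline only; Sha et al. compare a range of methods. Hypothetical data.
import numpy as np

def oversample_minority_group(X, y, group, seed=0):
    rng = np.random.default_rng(seed)
    values, counts = np.unique(group, return_counts=True)
    minority = values[np.argmin(counts)]
    extra = rng.choice(np.flatnonzero(group == minority),
                       size=counts.max() - counts.min(), replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep], group[keep]

X = np.random.default_rng(2).normal(size=(10, 3))
y = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["f", "f", "f", "m", "m", "m", "m", "m", "m", "m"])
X2, y2, g2 = oversample_minority_group(X, y, group)
print(np.unique(g2, return_counts=True))  # both groups now size 7
</syntaxhighlight>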
Zhang et al. (2023) [https://learninganalytics.upenn.edu/ryanbaker/ISLS23_annotation%20detector_short_submit.pdf pdf]
* Models developed to detect attributes of student feedback on other students’ mathematics solutions, reflecting the presence of three constructs: 1) commenting on the process, 2) commenting on the answer, and 3) relating to self.
* Models have approximately equal performance for males and females and for African American, Hispanic/Latinx, and White students (a generic per-group audit of this kind is sketched below).
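Parity claims like the one above are typically checked by disaggregating a held-out metric by group. A generic sketch follows; the data and the choice of AUC as the metric are assumptions, not Zhang et al.’s actual detectors or evaluation.

<syntaxhighlight lang="python">
# Sketch: disaggregate a held-out metric by demographic group to check
# approximate parity. Data and the AUC metric choice are assumptions,
# not Zhang et al.'s actual detectors or evaluation.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_group(y_true, scores, group):
    return {g: round(roc_auc_score(y_true[group == g], scores[group == g]), 3)
            for g in np.unique(group)}

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 600)
scores = 0.5 * y + rng.normal(0, 0.3, 600)
group = rng.choice(["female", "male"], size=600)
print(auc_by_group(y, scores, group))
</syntaxhighlight>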