Class vs. Student in a Bayesian Network Student Model
Y. Wang and J. Beck. Artificial Intelligence in Education, volume 7926 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, (2013)
DOI: 10.1007/978-3-642-39112-5_16
Abstract
For decades, intelligent tutoring systems researchers have been developing various methods of student modeling. Most of these models, including two of the most popular approaches, the Knowledge Tracing model and Performance Factor Analysis, share a similar assumption: the information needed to model the student is the student's own performance. However, other sources of information, such as the performance of other students in the same class, are not utilized. This paper extends the Student-Skill extension of Knowledge Tracing to take class information into account, and learns four parameters: prior knowledge, learn, guess, and slip, for each class of students enrolled in the system. The paper then compares the predictive accuracy obtained with four parameters per class versus four parameters per student to find out which parameter set works better at predicting student performance. The results show that modeling at a coarser grain size can actually yield higher predictive accuracy, and that data about classmates' performance results in higher predictive accuracy on unseen test data.
(private-note)Various combinations of class vs. individual student to fit classic BKT parameters like K0, L, and guess/slip (G, S).
The best configuration keeps everything at the class level except the learning rate L, but the one where everything is on the class level is almost as good.
Probably insufficient volume of per-student data? Get comparable students?
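For reference, the four parameters the note mentions are the classic Bayesian Knowledge Tracing quantities: prior knowledge (K0), learning rate (L), guess (G), and slip (S). A minimal sketch of the standard BKT predict/update cycle, with hypothetical parameter values (the paper's question is only whether these four numbers are fit per class or per student):

```python
def bkt_trace(responses, k0, learn, guess, slip):
    """Return predicted P(correct) before each response in a practice sequence.

    responses: iterable of 1 (correct) / 0 (incorrect) observations.
    k0, learn, guess, slip: the four classic BKT parameters; in the paper
    these are estimated either per class or per individual student.
    """
    p_know = k0  # current estimate of P(student has mastered the skill)
    preds = []
    for correct in responses:
        # Predicted probability of a correct answer given current mastery.
        p_correct = p_know * (1 - slip) + (1 - p_know) * guess
        preds.append(p_correct)
        # Bayesian update of mastery from the observed response.
        if correct:
            p_know = p_know * (1 - slip) / p_correct
        else:
            p_know = p_know * slip / (1 - p_correct)
        # Learning transition after the practice opportunity.
        p_know = p_know + (1 - p_know) * learn
    return preds
```

For example, `bkt_trace([1, 1, 0, 1], k0=0.3, learn=0.1, guess=0.2, slip=0.1)` starts at P(correct) = 0.3·0.9 + 0.7·0.2 = 0.41 and rises after each correct response. The parameter values here are illustrative, not the paper's fitted estimates.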
%0 Book Section
%1 citeulike:12473574
%A Wang, Yutao
%A Beck, Joseph
%B Artificial Intelligence in Education
%D 2013
%E Lane, H. Chad
%E Yacef, Kalina
%E Mostow, Jack
%E Pavlik, Philip
%I Springer Berlin Heidelberg
%K group-model, student-model
%P 151--160
%R 10.1007/978-3-642-39112-5_16
%T Class vs. Student in a Bayesian Network Student Model
%U http://dx.doi.org/10.1007/978-3-642-39112-5_16
%V 7926
%X For decades, intelligent tutoring systems researchers have been developing various methods of student modeling. Most of these models, including two of the most popular approaches, the Knowledge Tracing model and Performance Factor Analysis, share a similar assumption: the information needed to model the student is the student's own performance. However, other sources of information, such as the performance of other students in the same class, are not utilized. This paper extends the Student-Skill extension of Knowledge Tracing to take class information into account, and learns four parameters: prior knowledge, learn, guess, and slip, for each class of students enrolled in the system. The paper then compares the predictive accuracy obtained with four parameters per class versus four parameters per student to find out which parameter set works better at predicting student performance. The results show that modeling at a coarser grain size can actually yield higher predictive accuracy, and that data about classmates' performance results in higher predictive accuracy on unseen test data.
@incollection{citeulike:12473574,
abstract = {{For decades, intelligent tutoring systems researchers have been developing various methods of student modeling. Most of these models, including two of the most popular approaches, the Knowledge Tracing model and Performance Factor Analysis, share a similar assumption: the information needed to model the student is the student's own performance. However, other sources of information, such as the performance of other students in the same class, are not utilized. This paper extends the Student-Skill extension of Knowledge Tracing to take class information into account, and learns four parameters: prior knowledge, learn, guess, and slip, for each class of students enrolled in the system. The paper then compares the predictive accuracy obtained with four parameters per class versus four parameters per student to find out which parameter set works better at predicting student performance. The results show that modeling at a coarser grain size can actually yield higher predictive accuracy, and that data about classmates' performance results in higher predictive accuracy on unseen test data.}},
added-at = {2017-11-15T17:02:25.000+0100},
author = {Wang, Yutao and Beck, Joseph},
biburl = {https://www.bibsonomy.org/bibtex/22b005a71f2697e0be14286a6c7dd600d/brusilovsky},
booktitle = {Artificial Intelligence in Education},
citeulike-article-id = {12473574},
citeulike-linkout-0 = {http://dx.doi.org/10.1007/978-3-642-39112-5_16},
citeulike-linkout-1 = {http://link.springer.com/chapter/10.1007/978-3-642-39112-5_16},
comment = {(private-note)Various combinations of class vs. individual student to fit classic BKT parameters like K0, L, and guess/slip (G, S).
The best configuration keeps everything at the class level except the learning rate L, but the one where everything is on the class level is almost as good.
Probably insufficient volume of per-student data? Get comparable students?},
doi = {10.1007/978-3-642-39112-5_16},
editor = {Lane, H. Chad and Yacef, Kalina and Mostow, Jack and Pavlik, Philip},
interhash = {466a048603e2c707e6a094345c9e32f7},
intrahash = {2b005a71f2697e0be14286a6c7dd600d},
keywords = {group-model, student-model},
pages = {151--160},
posted-at = {2013-07-10 17:24:56},
priority = {2},
publisher = {Springer Berlin Heidelberg},
series = {Lecture Notes in Computer Science},
timestamp = {2017-11-15T17:02:25.000+0100},
title = {{Class vs. Student in a Bayesian Network Student Model}},
url = {http://dx.doi.org/10.1007/978-3-642-39112-5_16},
volume = 7926,
year = 2013
}