Inproceedings

Validity and Reliability of Student Models for Problem-Solving Activities

pages 1-11. ACM, April 2021.
DOI: 10.1145/3448139.3448140

Abstract

Student models are typically evaluated by predicting the correctness of the next answer. This approach is insufficient in the problem-solving context, especially for student models that use performance data beyond binary correctness. We propose more comprehensive methods for validating student models and illustrate them in the context of introductory programming. We demonstrate the insufficiency of the next answer correctness prediction task: it is able neither to reveal the low validity of student models that use only binary correctness, nor to show the increased validity of models that use other performance data. The key message is that the prevalent use of next answer correctness for validating student models, and of binary correctness as the only input to the models, is not always warranted and limits progress in learning analytics.
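To make the critiqued evaluation concrete: the "next answer correctness" protocol scores a student model by comparing its predicted probability of a correct next attempt against the observed binary outcome, typically with a metric such as AUC. The sketch below is illustrative only (it is not the authors' code, and the data are invented); it shows the standard form of this evaluation, which the abstract argues is insufficient on its own.

```python
# Hedged sketch of the standard next-answer-correctness evaluation.
# A student model outputs P(correct) for each next attempt; predictions
# are scored against observed binary outcomes with AUC.

def auc(y_true, y_score):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    # Fraction of positive/negative pairs ranked correctly (ties count 0.5).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example data: observed correctness of next answers and a
# hypothetical model's predicted probabilities.
observed = [1, 0, 1, 1, 0, 1, 0, 1]
predicted = [0.9, 0.2, 0.7, 0.3, 0.5, 0.8, 0.4, 0.6]

print(round(auc(observed, predicted), 3))  # → 0.867
```

A model can score well on this metric while still being invalid for problem-solving data richer than binary correctness, which is the paper's central point.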
