Abstract
Cognitive task analysis is a laborious process, made more onerous in educational platforms where many problems are user-created and most are left without identified knowledge components (KCs). Past approaches to this issue of untagged problems have centered on text mining to impute KCs. In this work, we advance KC imputation research by modeling both the content (text) of a problem and its context (the problems around it), using a novel application of skip-gram based representation learning applied to tens of thousands of student response sequences from the ASSISTments 2012 public dataset. We find that there is as much information in the contextual representation as in the content representation, with the combination of the two sources of information leading to 90\% accuracy in predicting the missing skill from a KC model of size 198. This work underscores the value of considering problems in context for the KC prediction task and has broad implications for its use in other modeling objectives such as KC model improvement.
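The contextual representation described above treats each student's sequence of attempted problems like a sentence of words, so skip-gram training pairs can be extracted from a sliding window over problem IDs. The following is a minimal sketch of that pair-extraction step, not the authors' implementation; the problem IDs and window size are hypothetical, and in practice the pairs would feed a skip-gram embedding model (e.g. word2vec) rather than be used directly.

```python
def skipgram_pairs(sequence, window=2):
    """Extract (target, context) pairs from a student's problem-ID
    sequence, pairing each problem with its neighbors within `window`."""
    pairs = []
    for i, target in enumerate(sequence):
        lo = max(0, i - window)
        hi = min(len(sequence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, sequence[j]))
    return pairs

# Hypothetical response sequence: four problems attempted in order.
student_seq = ["p101", "p102", "p103", "p104"]
pairs = skipgram_pairs(student_seq, window=1)
# Each problem is paired with its immediate neighbors,
# e.g. ("p102", "p101") and ("p102", "p103").
```

Embeddings learned from such pairs place problems that co-occur in student sequences near each other, which is the contextual signal the abstract contrasts with the text-based content signal.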