Formative Feedback on Student-Authored Summaries in Intelligent Textbooks Using Large Language Models

International Journal of Artificial Intelligence in Education (March 28, 2024)
DOI: 10.1007/s40593-024-00395-0

Abstract

As intelligent textbooks become more ubiquitous in classrooms and educational settings, the need to make them more interactive arises. One approach is to ask students to generate knowledge in response to textbook content and to provide feedback on that knowledge. This study develops Natural Language Processing models that automatically give students feedback on the quality of summaries written at the end of intelligent textbook sections. The study builds on the work of Botarleanu et al. (2022), who used a Longformer Large Language Model (LLM) to develop a summary grading model that explained around 55% of the variance in holistic summary scores assigned by human raters. The present study uses principal component analysis to distill scores from an analytic rubric into two principal components: content and wording. Two encoder-only classification models, fine-tuned from Longformer on the summaries and their source texts using these principal components, explained 82% and 70% of the score variance for content and wording, respectively. On a dataset of summaries collected through the crowd-sourcing site Prolific, the content model proved robust, although the wording model's accuracy was reduced compared to the training set. The developed models are freely available on HuggingFace and allow intelligent textbooks to provide real-time formative feedback on reading comprehension assessed through summarization. The models can also be used for other summarization applications in learning systems.
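To make the score-distillation step concrete, the sketch below shows how analytic rubric scores might be reduced to two principal components interpreted as content and wording, as the abstract describes. The rubric matrix here is placeholder data; the paper's actual rubric dimensions and samples are not reproduced in this entry.

# Minimal sketch: distilling analytic rubric scores into two principal
# components (content, wording), per the abstract. Data are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = summaries, columns = analytic rubric dimensions (placeholder values)
rubric_scores = np.array([
    [4, 3, 4, 2, 3],
    [2, 2, 1, 3, 2],
    [5, 4, 4, 4, 5],
])

# Standardize the rubric dimensions, then keep the two leading components.
pca = PCA(n_components=2)
components = pca.fit_transform(StandardScaler().fit_transform(rubric_scores))
print(components)                      # per-summary (content, wording) scores
print(pca.explained_variance_ratio_)   # variance explained by each component

Similarly, since the abstract notes the fine-tuned models are released on HuggingFace for real-time feedback, the following sketch shows one plausible way to load such a checkpoint and score a summary against its source text. The model identifier, the summary/source input pairing, and the single-output regression head are all assumptions for illustration; the authors' actual checkpoints and preprocessing may differ.

# Minimal sketch: scoring a student summary with a fine-tuned Longformer.
# Assumptions (not confirmed by the abstract): the HuggingFace model id,
# the summary + source input pairing, and a one-dimensional regression
# head predicting the "content" principal component.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "example-org/longformer-content-scoring"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def score_summary(summary: str, source: str) -> float:
    """Return a predicted content score for a summary of a source text."""
    # Longformer accepts long inputs (up to 4096 tokens), so the full
    # textbook section can be passed alongside the student summary.
    inputs = tokenizer(summary, source, truncation=True,
                       max_length=4096, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()

print(score_summary("The section explains ...", "Full section text ..."))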
