Exploring the Potential of Large Language Models to Generate Formative Programming Feedback

, , and . (2023). arXiv:2309.00029. Comment: Accepted to FIE 2023.

Abstract

Ever since the emergence of large language models (LLMs) and related applications, such as ChatGPT, their performance on and error analysis for programming tasks have been subject to research. In this work-in-progress paper, we explore the potential of such LLMs for computing educators and learners by analyzing the feedback they generate for a given input containing program code. In particular, we aim to (1) explore how an LLM like ChatGPT responds to students seeking help with their introductory programming tasks, and (2) identify the feedback types in its responses. To achieve these goals, we used students' programming sequences from a dataset gathered in a CS1 course as input for ChatGPT, along with questions designed to elicit feedback and correct solutions. The results show that ChatGPT performs reasonably well on some introductory programming tasks and student errors, meaning that students can potentially benefit. However, educators should provide guidance on how to use the feedback, as it can contain misleading information for novices.
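The setup the abstract describes — combining a CS1 task and a student's submission into a prompt that asks an LLM for formative feedback — can be sketched roughly as follows. This is a minimal illustration, not the authors' actual pipeline: the prompt wording, dataset fields, and function names below are all assumptions.

```python
def build_feedback_prompt(task_description: str, student_code: str) -> str:
    """Combine a CS1 task and a student's attempt into a single
    feedback-eliciting prompt for an LLM such as ChatGPT.
    (Illustrative only; the paper's exact prompts are not reproduced here.)"""
    return (
        "You are a tutor for an introductory programming (CS1) course.\n"
        f"Task: {task_description}\n"
        "Student submission:\n"
        "```python\n"
        f"{student_code}\n"
        "```\n"
        "Please give formative feedback: point out any errors, explain "
        "them, and suggest a correct solution."
    )

# Example: a typical novice off-by-one error (prints 0..9 instead of 1..10).
task = "Print the numbers 1 to 10, one per line."
code = "for i in range(10):\n    print(i)"
prompt = build_feedback_prompt(task, code)
# The resulting string would then be sent to the model, e.g. via a chat API.
```

In the study, each such prompt (built from a student's program sequence plus a feedback-eliciting question) served as one input to ChatGPT, whose responses were then categorized by feedback type.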

Description

[2309.00029] Exploring the Potential of Large Language Models to Generate Formative Programming Feedback
