Investigating the Utility of Self-explanation Through Translation Activities with a Code-Tracing Tutor
M. Caughey and K. Muldner. Artificial Intelligence in Education, pages 66--77. Cham, Springer Nature Switzerland, (2023)
Abstract
Code tracing is a foundational programming skill that involves simulating a program's execution line by line, tracking how variables change at each step. To code trace, students need to understand what a given program line means, which can be accomplished by translating it into plain English. Translation can be characterized as a form of self-explanation, a general learning mechanism that involves making inferences beyond the instructional materials. Our work investigates if this form of self-explanation improves learning from a code-tracing tutor we created using the CTAT framework. We created two versions of the tutor. In the experimental version, students were asked to translate lines of code while solving code-tracing problems. In the control condition students were only asked to code trace without translating. The two tutor versions were compared using a between-subjects study (N = 44). The experimental group performed significantly better on translation and code-generation questions, but the control group performed significantly better on code-tracing questions. We discuss the implications of this finding for the design of tutors providing code-tracing support.
%0 Conference Paper
%1 10.1007/978-3-031-36272-9_6
%A Caughey, Maia
%A Muldner, Kasia
%B Artificial Intelligence in Education
%C Cham
%D 2023
%E Wang, Ning
%E Rebolledo-Mendez, Genaro
%E Matsuda, Noboru
%E Santos, Olga C.
%E Dimitrova, Vania
%I Springer Nature Switzerland
%K AIED2023 code-tracing intelligent-tutoring progtutor self-explanation
%P 66--77
%T Investigating the Utility of Self-explanation Through Translation Activities with a Code-Tracing Tutor
%X Code tracing is a foundational programming skill that involves simulating a program's execution line by line, tracking how variables change at each step. To code trace, students need to understand what a given program line means, which can be accomplished by translating it into plain English. Translation can be characterized as a form of self-explanation, a general learning mechanism that involves making inferences beyond the instructional materials. Our work investigates if this form of self-explanation improves learning from a code-tracing tutor we created using the CTAT framework. We created two versions of the tutor. In the experimental version, students were asked to translate lines of code while solving code-tracing problems. In the control condition students were only asked to code trace without translating. The two tutor versions were compared using a between-subjects study (N = 44). The experimental group performed significantly better on translation and code-generation questions, but the control group performed significantly better on code-tracing questions. We discuss the implications of this finding for the design of tutors providing code-tracing support.
%@ 978-3-031-36272-9
@inproceedings{10.1007/978-3-031-36272-9_6,
abstract = {Code tracing is a foundational programming skill that involves simulating a program's execution line by line, tracking how variables change at each step. To code trace, students need to understand what a given program line means, which can be accomplished by translating it into plain English. Translation can be characterized as a form of self-explanation, a general learning mechanism that involves making inferences beyond the instructional materials. Our work investigates if this form of self-explanation improves learning from a code-tracing tutor we created using the CTAT framework. We created two versions of the tutor. In the experimental version, students were asked to translate lines of code while solving code-tracing problems. In the control condition students were only asked to code trace without translating. The two tutor versions were compared using a between-subjects study (N = 44). The experimental group performed significantly better on translation and code-generation questions, but the control group performed significantly better on code-tracing questions. We discuss the implications of this finding for the design of tutors providing code-tracing support.},
added-at = {2023-11-28T06:04:17.000+0100},
address = {Cham},
author = {Caughey, Maia and Muldner, Kasia},
biburl = {https://www.bibsonomy.org/bibtex/283a46c85a3174916d9eadbf045ab2fd6/brusilovsky},
booktitle = {Artificial Intelligence in Education},
description = {Investigating the Utility of Self-explanation Through Translation Activities with a Code-Tracing Tutor | SpringerLink},
editor = {Wang, Ning and Rebolledo-Mendez, Genaro and Matsuda, Noboru and Santos, Olga C. and Dimitrova, Vania},
interhash = {4dae19aa102d24376303011638b8d526},
intrahash = {83a46c85a3174916d9eadbf045ab2fd6},
isbn = {978-3-031-36272-9},
keywords = {AIED2023 code-tracing intelligent-tutoring progtutor self-explanation},
pages = {66--77},
publisher = {Springer Nature Switzerland},
timestamp = {2023-11-28T06:04:17.000+0100},
title = {Investigating the Utility of Self-explanation Through Translation Activities with a Code-Tracing Tutor},
year = {2023}
}