Evaluation of a Hybrid AI-Human Recommender for CS1 Instructors in a Real Educational Scenario

Responsive and Sustainable Educational Futures, pages 308--323. Cham: Springer Nature Switzerland, 2023.

Abstract

Automatic code graders, also called Programming Online Judges (OJs), can support students and instructors in introductory programming (CS1) courses. Using OJs in CS1, instructors select problems to compose assignment lists, whereas students submit their code solutions and receive instantaneous feedback. Whilst this process reduces the instructors' workload in evaluating students' code, selecting problems to compose assignments remains arduous. Recently, recommender systems have been proposed in the literature to support OJ users. Nonetheless, there is a lack of recommenders fitted for CS1 courses, and the ones found in the literature have not been assessed by the target users in a real educational scenario. It is worth noting that hybrid human/AI systems are claimed to potentially surpass either humans or AI in isolation. In this study, we adapted and evaluated a state-of-the-art hybrid human/AI recommender to support CS1 instructors in selecting problems to compose variations of CS1 assignments. We compared data-driven measures (e.g., time students take to solve problems, number of logical lines of code, and hit rate) extracted from student logs whilst solving programming problems from assignments created by instructors alone versus assignments created in collaboration with an adaptation of a cutting-edge hybrid human/AI method. As a result, employing a data analysis that compared experimental and control conditions using multi-level regressions, we observed that the recommender provided problems comparable to human-selected ones on all data-driven measures tested.
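
The abstract's analysis compares experimental and control conditions with multi-level regressions over repeated student measurements. As a minimal, hypothetical sketch of that style of analysis (not the authors' code; the column names and simulated data below are invented for illustration), one could fit a mixed-effects model with a random intercept per student using Python's statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated student logs: each student solves problems under both the
    # instructor-only (control) and hybrid-recommender (experimental) conditions.
    rng = np.random.default_rng(0)
    rows = []
    for sid in range(30):
        baseline = rng.normal(600, 60)  # per-student baseline solve time (seconds)
        for cond in ("control", "hybrid"):
            rows.append({
                "student_id": sid,
                "condition": cond,
                "solve_time": baseline + rng.normal(0, 30),
            })
    logs = pd.DataFrame(rows)

    # Multi-level (mixed-effects) regression: fixed effect for condition,
    # random intercept per student to account for repeated measures.
    model = smf.mixedlm("solve_time ~ C(condition)", logs, groups=logs["student_id"])
    result = model.fit()
    print(result.summary())

The coefficient on the condition term estimates the gap between conditions; a coefficient indistinguishable from zero would be consistent with the paper's finding that recommender-assisted assignments were comparable to instructor-selected ones on the measures tested.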

Description

Evaluation of a Hybrid AI-Human Recommender for CS1 Instructors in a Real Educational Scenario | SpringerLink

Tags

community
