Abstract

Instructors routinely use automated assessment methods to evaluate the semantic qualities of student implementations and, sometimes, test suites. In this work, we distill a variety of automated assessment methods in the literature down to a pair of assessment models. We identify pathological assessment outcomes in each model that point to underlying methodological flaws. These theoretical flaws broadly threaten the validity of the techniques, and we actually observe them in multiple assignments of an introductory programming course. We propose adjustments that remedy these flaws and then demonstrate, on these same assignments, that our interventions improve the accuracy of assessment. We believe that with these adjustments, instructors can greatly improve the accuracy of automated assessment.

Description

An interesting discussion of how to ensure that a test suite is correct, and how to assess test suites in courses where the test suite is part of the assignment (a minimal sketch of the general idea follows below).

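The paper's own assessment models are described in the full text; as an illustration only, a common way to evaluate a student test suite is to run it against a known-correct reference implementation (every test must pass) and against deliberately buggy variants (each bug should be rejected by at least one test). The sketch below is a minimal, hypothetical Python version of that idea; all names, implementations, and tests are invented for illustration and are not taken from the paper.

# Illustrative sketch only: judge a student test suite by validity against a
# correct reference implementation and thoroughness against buggy variants.
# All names and implementations here are hypothetical.

from typing import Callable, Dict, List

# A "test" is a predicate over an implementation of the function under test.
Impl = Callable[[int], int]
Test = Callable[[Impl], bool]

def assess_suite(tests: List[Test],
                 reference: Impl,
                 buggy_variants: Dict[str, Impl]) -> Dict[str, object]:
    """Return a rough validity/thoroughness report for a student test suite."""
    # Validity: every test must accept the correct reference implementation.
    valid = all(t(reference) for t in tests)
    # Thoroughness: a buggy variant is "caught" if at least one test rejects it.
    caught = {name: any(not t(impl) for t in tests)
              for name, impl in buggy_variants.items()}
    return {"valid": valid, "caught": caught}

# Toy example: test suites for an absolute-value function.
def ref_abs(x: int) -> int: return x if x >= 0 else -x
def buggy_identity(x: int) -> int: return x    # wrong for negative inputs
def buggy_negate(x: int) -> int: return -x     # wrong for positive inputs

student_tests: List[Test] = [
    lambda f: f(3) == 3,
    lambda f: f(-4) == 4,
]

print(assess_suite(student_tests, ref_abs,
                   {"identity": buggy_identity, "negate": buggy_negate}))
# -> {'valid': True, 'caught': {'identity': True, 'negate': True}}
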
Links and resources

Tags

Community

  • @brusilovsky
  • @dblp