
Questionnaires for Eliciting Evaluation Data from Users of Interactive Question Answering Systems

Diane Kelly, Paul B. Kantor, Emile L. Morse, Jean Scholtz, and Ying Sun. Natural Language Engineering, 15 (1): 119-141 (2009)
DOI: 10.1017/S1351324908004932

Abstract

Evaluating interactive question answering (QA) systems with real users can be challenging: traditional evaluation measures based on the relevance of returned items are difficult to employ because relevance judgments can be unstable in multi-user evaluations. The work reported in this paper evaluates the effectiveness of three questionnaires in distinguishing among a set of interactive QA systems: a Cognitive Workload Questionnaire (NASA TLX), and Task and System Questionnaires customized to a specific interactive QA application. The questionnaires were evaluated with four systems, seven analysts, and eight scenarios during a two-week workshop. Overall, results demonstrate that all three questionnaires are effective at distinguishing among systems, with the Task Questionnaire being the most sensitive. Results also provide initial support for the validity and reliability of the questionnaires.
