Abstract

In contrast to traditional information retrieval systems, which return ranked lists of documents that users must manually browse through, a question answering system attempts to directly answer natural language questions posed by the user. Although such systems possess language processing capabilities, they still rely on traditional document retrieval techniques to generate an initial candidate set of documents. In this paper, we argue that document retrieval for question answering represents a different task than retrieving documents in response to more general retrospective information needs. Thus, to guide future system development, specialized question answering test collections must be constructed. We have shown that the current evaluation resources have major shortcomings, and to remedy the situation, we have manually created a small, reusable question answering test collection for research purposes. This article describes our methodology for building this test collection and discusses issues we encountered along the way regarding the notion of "answer correctness".
