Evaluating evaluation measure stability

Chris Buckley and Ellen M. Voorhees. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '00), pages 33--40. ACM, New York, NY, USA, 2000.
DOI: 10.1145/345508.345543

Abstract

This paper presents a novel way of examining the accuracy of the evaluation measures commonly used in information retrieval experiments. It validates several of the rules of thumb experimenters use, such as that the number of queries needed for a good experiment is at least 25, and that 50 is better, while challenging other beliefs, such as that the common evaluation measures are equally reliable. As an example, we show that Precision at 30 documents has about twice the average error rate of Average Precision. These results can help information retrieval researchers design experiments that provide a desired level of confidence in their results. In particular, we suggest that researchers using Web measures such as Precision at 10 documents will need many more than 50 queries, or will have to require two methods to have a very large difference in evaluation scores, before concluding that the two methods are actually different.
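
For readers unfamiliar with the two measures the abstract contrasts, the following is a minimal Python sketch (not taken from the paper) of Precision at k documents and Average Precision for a single query; the function names, toy ranking, and relevance judgments are illustrative assumptions.

    # Minimal sketch (not from the paper): the two measures the abstract
    # contrasts, computed for a single query from a ranked list of document
    # IDs and a set of relevant document IDs. The toy data is hypothetical.

    def precision_at_k(ranking, relevant, k):
        """Fraction of the top-k ranked documents that are relevant."""
        return sum(1 for doc in ranking[:k] if doc in relevant) / k

    def average_precision(ranking, relevant):
        """Sum of precision values at the ranks of retrieved relevant
        documents, divided by the total number of relevant documents."""
        hits = 0
        total = 0.0
        for rank, doc in enumerate(ranking, start=1):
            if doc in relevant:
                hits += 1
                total += hits / rank
        return total / len(relevant) if relevant else 0.0

    if __name__ == "__main__":
        ranking = ["d3", "d1", "d7", "d2", "d5"]   # hypothetical system output
        relevant = {"d1", "d2", "d9"}              # hypothetical judgments
        print(precision_at_k(ranking, relevant, 5))  # 0.4
        print(average_precision(ranking, relevant))  # (1/2 + 2/4) / 3 = 0.333...

The error rates the abstract refers to concern how reliably such per-query scores, averaged over a query set, distinguish two retrieval methods; the sketch shows only the measures themselves, not the paper's error-rate experiment.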

