Article

Crowdsourcing for relevance evaluation

Omar Alonso, Daniel E. Rose, and Benjamin Stewart.
SIGIR Forum, 42(2):9–15 (November 2008)
DOI: 10.1145/1480506.1480508

Abstract

Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each performs a small evaluation task.
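
To make the paradigm concrete: in a crowdsourced evaluation, each worker contributes a small relevance judgment and the system combines many such judgments into a single label per query-document pair. Below is a minimal Python sketch of one common aggregation strategy, majority voting over per-worker labels. The data tuples, the label vocabulary, and the `aggregate` helper are invented for illustration and do not reproduce the paper's actual TERC design.

    from collections import Counter, defaultdict

    # Each crowd worker supplies a graded judgment for a (query, doc) pair.
    # The tuples below are hypothetical example data.
    judgments = [
        ("jaguar speed", "doc1", "w1", "relevant"),
        ("jaguar speed", "doc1", "w2", "relevant"),
        ("jaguar speed", "doc1", "w3", "not relevant"),
        ("jaguar speed", "doc2", "w1", "not relevant"),
        ("jaguar speed", "doc2", "w2", "not relevant"),
    ]

    def aggregate(judgments):
        """Collapse per-worker labels into one label per (query, doc) pair."""
        votes = defaultdict(Counter)
        for query, doc, _worker, label in judgments:
            votes[(query, doc)][label] += 1
        # most_common(1) picks the winning label; ties resolve arbitrarily.
        return {pair: counts.most_common(1)[0][0]
                for pair, counts in votes.items()}

    if __name__ == "__main__":
        for (query, doc), label in aggregate(judgments).items():
            print(f"{query!r} / {doc}: {label}")

Because each task is small and judgments are redundant across workers, simple vote-based aggregation like this can smooth over individual worker noise, which is one reason the crowdsourcing approach is cheaper than a full editorial evaluation.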
