Abstract
The large number of experiments carried out within evaluation initiatives for information retrieval has produced an invaluable resource for further research and meta-analysis. This study presents an analysis of the results of the Cross-Language Evaluation Forum (CLEF) campaigns from 2000 to 2003. It examines system performance for each individual topic and focuses on the influence of named entities on retrieval performance. Named entities in topics lead to a significant improvement in retrieval quality, both overall and for most systems and tasks. System performance differs between topics with no named entities, with one or two, and with three or more. The knowledge gained by mining the evaluation results can be exploited both to improve retrieval systems and to design topics for future CLEF campaigns.