Inproceedings

User's Knowledge and Information Needs in Information Retrieval Evaluation

Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, pages 170-178. ACM, July 2022.
DOI: 10.1145/3503252.3531325

Abstract

Existing evaluation measures for information retrieval algorithms still lack awareness of the user's cognitive state and its dynamics. They often consider an isolated query-document environment and ignore the user's prior knowledge and the motivation behind the query. Retrieval algorithms and evaluation measures that do account for these factors limit a result's relevance to a single search session, query, or search goal. We present a novel evaluation measure that overcomes this limitation. The framework measures the relevance of a result/document by examining its content and assessing the possible learning outcomes for a specific user; hence, not all documents are relevant to all users. The proposed measure rewards a result's content for its novelty with respect to what the user already knows and what has previously been proposed to the user, and for its contribution to achieving the search goals/needs. We demonstrate the effectiveness of the measure by comparing it to the knowledge gain reported by 361 crowd-sourced users searching the Web across 10 different topics.
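
The abstract does not give the measure's formula, but its core idea, scoring a document by the novelty of its concepts relative to the user's knowledge and previously proposed results plus its contribution to the search goal, can be sketched roughly as below. This is a minimal illustration in Python; the concept-set representation, the function name knowledge_aware_relevance, and the linear weighting are assumptions for exposition, not the authors' definitions.

    # A minimal sketch (assumed representation, not the paper's actual formula).
    from typing import Set

    def knowledge_aware_relevance(
        doc_concepts: Set[str],       # concepts found in the document's content
        user_knowledge: Set[str],     # concepts the user already knows
        seen_concepts: Set[str],      # concepts in results already proposed to the user
        goal_concepts: Set[str],      # concepts the user needs for the search goal
        novelty_weight: float = 0.5,  # assumed trade-off between the two rewards
    ) -> float:
        """Score one document for one specific user."""
        if not doc_concepts:
            return 0.0
        # Reward 1: novelty w.r.t. prior knowledge and previously proposed results.
        novel = doc_concepts - user_knowledge - seen_concepts
        novelty = len(novel) / len(doc_concepts)
        # Reward 2: contribution toward the user's search goal/needs.
        useful = doc_concepts & goal_concepts
        usefulness = len(useful) / len(goal_concepts) if goal_concepts else 0.0
        return novelty_weight * novelty + (1.0 - novelty_weight) * usefulness

Under this sketch, a document whose concepts the user already knows scores zero on the novelty term, so the same result can be relevant to one user and not another, matching the abstract's premise.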

Users

  • @brusilovsky
  • @dblp
