@inproceedings{El_Zein_2022,
abstract = {The existing evaluation measures for information retrieval algorithms still lack awareness about the user’s cognitive aspects and their dynamics. They often consider an isolated query-document environment and ignore the user’s previous knowledge and his/her motivation behind the query. The retrieval algorithms and evaluation measures that account for those factors limit the result’s relevance to one search session, one query, or one search goal. We present a novel evaluation measure that overcomes this limitation. The framework measures the relevance of a result/document by examining its content and assessing the possible learning outcomes, for a specific user. Hence not all documents are relevant to all users. The proposed evaluation measure rewards the results’ content for their novelty with respect to what the user already knows and what has been previously proposed. The results are also rewarded for their contribution to achieving the search goals/needs. We demonstrate the efficiency of the measure by comparing it to the knowledge gain reported by 361 crowd-sourced users searching the Web across 10 different topics.},
author = {El Zein, Dima and da Costa Pereira, C{\'{e}}lia},
booktitle = {Proceedings of the 30th {ACM} Conference on User Modeling, Adaptation and Personalization},
doi = {10.1145/3503252.3531325},
keywords = {information-retrieval knowledge-modeling umap2022},
month = jul,
pages = {170--178},
publisher = {{ACM}},
title = {User's Knowledge and Information Needs in Information Retrieval Evaluation},
url = {https://doi.org/10.1145/3503252.3531325},
year = 2022
}