Fairness and Transparency in Recommendation: The Users' Perspective
N. Sonboli, J. Smith, F. Berenfus, R. Burke, and C. Fiesler. Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, pp. 274-279. ACM, (June 2021). Note: How to explain recommendations that are generated from a fairness perspective?
DOI: 10.1145/3450613.3456835
Abstract
Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Even though the previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, this work has not been focused on recommendation. Here, we consider user perspectives of fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features – informed by the needs of our participants – that could improve user understanding of and trust in fairness-aware recommender systems.
Description
Fairness and Transparency in Recommendation: The Users’ Perspective | Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization
%0 Conference Paper
%1 Sonboli_2021
%A Sonboli, Nasim
%A Smith, Jessie J.
%A Berenfus, Florencia Cabral
%A Burke, Robin
%A Fiesler, Casey
%B Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization
%D 2021
%I ACM
%K explanation fairness recommender transparency umap2021
%P 274-279
%R 10.1145/3450613.3456835
%T Fairness and Transparency in Recommendation: The Users' Perspective
%U https://doi.org/10.1145%2F3450613.3456835
%X Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Even though the previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, this work has not been focused on recommendation. Here, we consider user perspectives of fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features – informed by the needs of our participants – that could improve user understanding of and trust in fairness-aware recommender systems.
@inproceedings{Sonboli_2021,
abstract = {Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Even though the previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, this work has not been focused on recommendation. Here, we consider user perspectives of fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features – informed by the needs of our participants – that could improve user understanding of and trust in fairness-aware recommender systems.
},
added-at = {2021-07-12T23:29:34.000+0200},
author = {Sonboli, Nasim and Smith, Jessie J. and Berenfus, Florencia Cabral and Burke, Robin and Fiesler, Casey},
biburl = {https://www.bibsonomy.org/bibtex/220c1ed8d104534cdafc5bfc18457cbf4/brusilovsky},
booktitle = {Proceedings of the 29th {ACM} Conference on User Modeling, Adaptation and Personalization},
description = {Fairness and Transparency in Recommendation: The Users’ Perspective | Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization},
doi = {10.1145/3450613.3456835},
interhash = {c9a367141703e1fb7fd8d7f3fc7dd366},
intrahash = {20c1ed8d104534cdafc5bfc18457cbf4},
keywords = {explanation fairness recommender transparency umap2021},
month = jun,
note = {How to explain recommendations that are generated from a fairness perspective?},
pages = {274--279},
publisher = {{ACM}},
timestamp = {2023-06-05T00:24:03.000+0200},
title = {Fairness and Transparency in Recommendation: The Users' Perspective},
url = {https://doi.org/10.1145%2F3450613.3456835},
year = 2021
}