S. Li, A. Karatzoglou, and C. Gentile. Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 539--548. New York, NY, USA, ACM, (2016)
DOI: 10.1145/2911451.2911548
Abstract
Classical collaborative filtering and content-based filtering methods try to learn a static recommendation model given training data. These approaches are far from ideal in highly dynamic recommendation domains such as news recommendation and computational advertisement, where the set of items and users is very fluid. In this work, we investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed bandit settings. Our algorithm takes into account the collaborative effects that arise due to the interaction of the users with the items, by dynamically grouping users based on the items under consideration and, at the same time, grouping items based on the similarity of the clusterings induced over the users. The resulting algorithm thus takes advantage of preference patterns in the data in a way akin to collaborative filtering methods. We provide an empirical analysis on medium-size real-world datasets, showing scalability and increased prediction performance (as measured by click-through rate) over state-of-the-art methods for clustering bandits. We also provide a regret analysis within a standard linear stochastic noise setting.
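To make the exploration-exploitation idea concrete, the following is a minimal sketch of a clustered contextual bandit in the LinUCB style. It is not the paper's algorithm (which clusters users and items adaptively); here the user clustering is fixed and only the per-cluster linear models and confidence bonuses are illustrated. The class name `ClusteredLinUCB`, the parameters `alpha` and `n_clusters`, and the toy simulation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ClusteredLinUCB:
    """Toy sketch: one ridge-regression (LinUCB-style) model per user cluster.

    Illustrative only -- a fixed clustering stands in for the paper's
    adaptive user/item clustering. alpha controls exploration strength.
    """
    def __init__(self, n_clusters, dim, alpha=1.0):
        self.alpha = alpha
        # Per-cluster ridge statistics: A = I + sum of x x^T, b = sum of r * x
        self.A = [np.eye(dim) for _ in range(n_clusters)]
        self.b = [np.zeros(dim) for _ in range(n_clusters)]

    def select(self, cluster, item_features):
        """Pick the item maximizing the upper confidence bound for this cluster."""
        A_inv = np.linalg.inv(self.A[cluster])
        theta = A_inv @ self.b[cluster]  # ridge estimate of the cluster's preferences
        ucb = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
               for x in item_features]
        return int(np.argmax(ucb))

    def update(self, cluster, x, reward):
        # Fold the observed (context, reward) pair into the cluster's statistics.
        self.A[cluster] += np.outer(x, x)
        self.b[cluster] += reward * x

# Tiny simulation: two user clusters with opposite preferences over 2-D items.
bandit = ClusteredLinUCB(n_clusters=2, dim=2, alpha=0.5)
true_theta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
items = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
for t in range(200):
    c = t % 2                       # alternate between the two clusters
    a = bandit.select(c, items)
    r = float(items[a] @ true_theta[c]) + 0.1 * rng.standard_normal()
    bandit.update(c, items[a], r)
# After enough rounds, each cluster's model should favor its own item.
```

Sharing one model per cluster is what gives the collaborative effect: feedback from any user in a cluster tightens the confidence bounds for everyone in it, which is the mechanism the paper extends with adaptive, item-dependent clusterings.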
%0 Conference Paper
%1 Li:2016:CFB:2911451.2911548
%A Li, Shuai
%A Karatzoglou, Alexandros
%A Gentile, Claudio
%B Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval
%C New York, NY, USA
%D 2016
%I ACM
%K cf clustering recommender toread
%P 539--548
%R 10.1145/2911451.2911548
%T Collaborative Filtering Bandits
%U https://doi.org/10.1145/2911451.2911548
%X Classical collaborative filtering and content-based filtering methods try to learn a static recommendation model given training data. These approaches are far from ideal in highly dynamic recommendation domains such as news recommendation and computational advertisement, where the set of items and users is very fluid. In this work, we investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed bandit settings. Our algorithm takes into account the collaborative effects that arise due to the interaction of the users with the items, by dynamically grouping users based on the items under consideration and, at the same time, grouping items based on the similarity of the clusterings induced over the users. The resulting algorithm thus takes advantage of preference patterns in the data in a way akin to collaborative filtering methods. We provide an empirical analysis on medium-size real-world datasets, showing scalability and increased prediction performance (as measured by click-through rate) over state-of-the-art methods for clustering bandits. We also provide a regret analysis within a standard linear stochastic noise setting.
%@ 978-1-4503-4069-4
@inproceedings{Li:2016:CFB:2911451.2911548,
abstract = {Classical collaborative filtering and content-based filtering methods try to learn a static recommendation model given training data. These approaches are far from ideal in highly dynamic recommendation domains such as news recommendation and computational advertisement, where the set of items and users is very fluid. In this work, we investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed bandit settings. Our algorithm takes into account the collaborative effects that arise due to the interaction of the users with the items, by dynamically grouping users based on the items under consideration and, at the same time, grouping items based on the similarity of the clusterings induced over the users. The resulting algorithm thus takes advantage of preference patterns in the data in a way akin to collaborative filtering methods. We provide an empirical analysis on medium-size real-world datasets, showing scalability and increased prediction performance (as measured by click-through rate) over state-of-the-art methods for clustering bandits. We also provide a regret analysis within a standard linear stochastic noise setting.},
acmid = {2911548},
added-at = {2016-12-02T10:22:31.000+0100},
address = {New York, NY, USA},
author = {Li, Shuai and Karatzoglou, Alexandros and Gentile, Claudio},
biburl = {https://www.bibsonomy.org/bibtex/2f9d4edc77bc91c2bc38654973c136990/hotho},
booktitle = {Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval},
description = {Collaborative Filtering Bandits},
doi = {10.1145/2911451.2911548},
interhash = {9c077fdfee057c587562bf016767397d},
intrahash = {f9d4edc77bc91c2bc38654973c136990},
isbn = {978-1-4503-4069-4},
keywords = {cf clustering recommender toread},
location = {Pisa, Italy},
numpages = {10},
pages = {539--548},
publisher = {ACM},
series = {SIGIR '16},
timestamp = {2016-12-02T10:23:40.000+0100},
title = {Collaborative Filtering Bandits},
url = {https://doi.org/10.1145/2911451.2911548},
year = 2016
}