Personalized Top-N Sequential Recommendation via Convolutional Sequence
Embedding
J. Tang and K. Wang (2018). arXiv:1809.07426. Comment: Accepted at WSDM 2018.
Abstract
Top-$N$ sequential recommendation models each user as a sequence of items
interacted in the past and aims to predict top-$N$ ranked items that a user
will likely interact in a 'near future'. The order of interaction implies that
sequential patterns play an important role where more recent items in a
sequence have a larger impact on the next item. In this paper, we propose a
Convolutional Sequence Embedding Recommendation Model (Caser) as a
solution to address this requirement. The idea is to embed a sequence of recent
items into an 'image' in the time and latent spaces and learn sequential
patterns as local features of the image using convolutional filters. This
approach provides a unified and flexible network structure for capturing both
general preferences and sequential patterns. The experiments on public datasets
demonstrated that Caser consistently outperforms state-of-the-art sequential
recommendation methods on a variety of common evaluation metrics.
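The abstract's core idea can be sketched in plain Python: the last L item embeddings form an L x d "image", horizontal filters of height h slide over the time axis and are max-pooled to capture sequential patterns, and a vertical filter weights the L time steps per latent dimension. This is a hedged, toy illustration under assumed shapes (L, d, h and random weights are illustrative), not the authors' implementation.

```python
import random

random.seed(0)
L, d, h = 5, 4, 2  # sequence length, embedding dim, horizontal filter height

# Toy embeddings of the last L items: the L x d "image" in time/latent space.
E = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(L)]

# One horizontal filter (h x d): convolve over the time axis, ReLU, max-pool.
F_h = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(h)]
conv = []
for t in range(L - h + 1):
    s = sum(F_h[i][j] * E[t + i][j] for i in range(h) for j in range(d))
    conv.append(max(s, 0.0))       # ReLU activation
o_horizontal = max(conv)           # max-pooling over time

# One vertical filter (L x 1): a learned weighted sum over time steps,
# producing one value per latent dimension.
F_v = [random.uniform(-1, 1) for _ in range(L)]
o_vertical = [sum(F_v[t] * E[t][j] for t in range(L)) for j in range(d)]

print(len(conv), len(o_vertical))
```

In the full model, many such horizontal and vertical filters are used, their outputs are concatenated with a user embedding (the "general preference" part), and the result feeds a final prediction layer over items.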
Description
[1809.07426] Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding
%0 Generic
%1 tang2018personalized
%A Tang, Jiaxi
%A Wang, Ke
%D 2018
%K caser diss preprocessing recommendation session
%T Personalized Top-N Sequential Recommendation via Convolutional Sequence
Embedding
%U http://arxiv.org/abs/1809.07426
%X Top-$N$ sequential recommendation models each user as a sequence of items
interacted in the past and aims to predict top-$N$ ranked items that a user
will likely interact in a 'near future'. The order of interaction implies that
sequential patterns play an important role where more recent items in a
sequence have a larger impact on the next item. In this paper, we propose a
Convolutional Sequence Embedding Recommendation Model (Caser) as a
solution to address this requirement. The idea is to embed a sequence of recent
items into an 'image' in the time and latent spaces and learn sequential
patterns as local features of the image using convolutional filters. This
approach provides a unified and flexible network structure for capturing both
general preferences and sequential patterns. The experiments on public datasets
demonstrated that Caser consistently outperforms state-of-the-art sequential
recommendation methods on a variety of common evaluation metrics.
@misc{tang2018personalized,
abstract = {Top-$N$ sequential recommendation models each user as a sequence of items
interacted in the past and aims to predict top-$N$ ranked items that a user
will likely interact in a `near future'. The order of interaction implies that
sequential patterns play an important role where more recent items in a
sequence have a larger impact on the next item. In this paper, we propose a
Convolutional Sequence Embedding Recommendation Model (\emph{Caser}) as a
solution to address this requirement. The idea is to embed a sequence of recent
items into an `image' in the time and latent spaces and learn sequential
patterns as local features of the image using convolutional filters. This
approach provides a unified and flexible network structure for capturing both
general preferences and sequential patterns. The experiments on public datasets
demonstrated that Caser consistently outperforms state-of-the-art sequential
recommendation methods on a variety of common evaluation metrics.},
added-at = {2023-02-07T14:24:42.000+0100},
author = {Tang, Jiaxi and Wang, Ke},
biburl = {https://www.bibsonomy.org/bibtex/28f0b1f943bf5c1c7804a51c88c141b92/e.fischer},
description = {[1809.07426] Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding},
interhash = {4e4193027b9875bf84b114b74447e674},
intrahash = {8f0b1f943bf5c1c7804a51c88c141b92},
keywords = {caser diss preprocessing recommendation session},
note = {arXiv:1809.07426. Comment: Accepted at WSDM 2018},
timestamp = {2023-02-07T14:24:42.000+0100},
title = {Personalized Top-N Sequential Recommendation via Convolutional Sequence
Embedding},
url = {http://arxiv.org/abs/1809.07426},
year = 2018
}