
A Human Perspective on Algorithmic Similarity

Fourteenth ACM Conference on Recommender Systems (RecSys '20), ACM, September 2020
DOI: 10.1145/3383313.3411549

Abstract

In the Netflix user interface (UI), when a row or UI element is named “Because You Watched...”, “More Like This”, or “Because you added to your list”, the overarching goal is to recommend a movie or TV show that a member might like based on the fact that they took a meaningful action on a source item. We have employed similar recommendations in many UI elements: on the homepage as a row of recommendations, after you click into a title, or as a piece of information about why a member should watch a title. From an algorithmic perspective, there are many ways to define a “successful” similar recommendation. We sought to broaden the definition of success.

To this end, the Consumer Insights team recently completed a suite of research projects to explore the intricacies of member perceptions of similar recommendations. The Netflix Consumer Insights team employs qualitative (e.g., in-depth interviews) and quantitative (e.g., surveys) research methods, interfacing directly with Netflix members to uncover pain points that can inspire new product innovation. The research concluded that, while the typical member believes movies are broadly similar when they share a common genre or theme, similarity is more complex, nuanced, and personal than we might have imagined. The vernacular we use in the UI implies that there should be at least some kind of relationship between the source item and the recommendations that follow. Many of our similar recommendations felt “out of place”, mostly because the relationship between the source item and the recommendation was unclear or absent. When similar recommendations tell a completely misleading, incorrect, or confusing story, member trust can be broken.

We will structure the presentation around three new insights that our research found to have an influence on the perception of similarity in the context of Netflix, as well as the research methods used to uncover those insights. First, the reason a member loves a given movie will vary. For example, do you want to watch other baseball movies like Field of Dreams, or would you prefer other romances like Field of Dreams? Second, members are more or less flexible about how similar a recommendation actually needs to be depending on the properties of the canvas containing the recommendation and on how they interact with it. For example, a “Because You Watched” row on the homepage implies looser similarity, while a “More Like This” gallery behind a click into the source item implies stricter similarity. Finally, even when we held the UI element constant, we found that similar recommendations are only valuable in some contexts. After finishing a movie, a member might prefer a similar recommendation one day and a change of pace the next. Research methods discussed will include single-arrangement inverse multi-dimensional scaling, survey experimentation, and ways to apply qualitative research to improve algorithmic recommendations.
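As a rough illustration of the single-arrangement method mentioned above: participants place titles on a 2D canvas so that similar titles sit close together, and pairwise distances are read off as dissimilarities, then aggregated across participants. The sketch below is a minimal, hedged version of that idea; the title names, coordinates, and max-distance normalization are illustrative assumptions, not Netflix data or the authors' actual analysis pipeline.

```python
from itertools import combinations
from math import dist


def arrangement_dissimilarities(placements):
    """Given one participant's arrangement (title -> (x, y) canvas position),
    return pairwise dissimilarities normalized to [0, 1] by the largest
    on-canvas distance, so arrangements of different scales are comparable."""
    pairs = list(combinations(sorted(placements), 2))
    raw = {pair: dist(placements[pair[0]], placements[pair[1]]) for pair in pairs}
    max_d = max(raw.values()) or 1.0  # guard against a degenerate arrangement
    return {pair: d / max_d for pair, d in raw.items()}


def average_dissimilarities(participants):
    """Average the normalized dissimilarities across all participants to get
    a group-level dissimilarity for each pair of titles."""
    totals = {}
    for placements in participants:
        for pair, d in arrangement_dissimilarities(placements).items():
            totals[pair] = totals.get(pair, 0.0) + d
    return {pair: total / len(participants) for pair, total in totals.items()}


# Hypothetical data: two participants each arrange three titles on a canvas.
participants = [
    {"Field of Dreams": (0, 0), "Moneyball": (1, 0), "The Notebook": (5, 4)},
    {"Field of Dreams": (0, 0), "Moneyball": (2, 1), "The Notebook": (6, 2)},
]
group_rdm = average_dissimilarities(participants)
```

In a real study, the resulting group-level dissimilarity structure would then be examined (e.g., with multi-dimensional scaling) to see which notions of similarity — genre, theme, tone — members actually use when judging titles.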
