
Reading Tea Leaves: How Humans Interpret Topic Models

Jonathan Chang, Jordan L. Boyd-Graber, Sean Gerrish, Chong Wang, and David M. Blei. NIPS, pages 288--296. Curran Associates, Inc., 2009.

Abstract

Probabilistic topic models are a popular tool for the unsupervised analysis of text, providing both a predictive model of future text and a latent topic representation of the corpus. Practitioners typically assume that the latent space is semantically meaningful. It is used to check models, summarize the corpus, and guide exploration of its contents. However, whether the latent space is interpretable is in need of quantitative evaluation. In this paper, we present new quantitative methods for measuring semantic meaning in inferred topics. We back these measures with large-scale user studies, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood. Surprisingly, topic models which perform better on held-out likelihood may infer less semantically meaningful topics.
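The sketch below is not from the paper; it is a minimal illustration of the contrast the abstract draws between the standard quantitative score (held-out likelihood) and the latent topic representation that practitioners inspect for semantic meaning. It assumes gensim's LDA implementation and a tiny made-up corpus; the paper's actual contribution, large-scale human evaluations of topic interpretability, is not reproduced here.

    # Illustrative sketch only: fit a small LDA model, then compare the
    # predictive view (held-out per-word likelihood bound) with the
    # interpretability view (top words per topic). Corpus and parameters
    # are toy assumptions, not the paper's setup.
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel

    train_docs = [
        ["model", "topic", "corpus", "word", "inference"],
        ["tea", "leaves", "cup", "brew", "flavor"],
        ["model", "word", "latent", "topic", "corpus"],
        ["tea", "cup", "brew", "leaves", "aroma"],
    ]
    heldout_docs = [
        ["topic", "model", "corpus"],
        ["tea", "brew", "cup"],
    ]

    dictionary = Dictionary(train_docs)
    train_bow = [dictionary.doc2bow(doc) for doc in train_docs]
    heldout_bow = [dictionary.doc2bow(doc) for doc in heldout_docs]

    lda = LdaModel(train_bow, id2word=dictionary, num_topics=2,
                   passes=20, random_state=0)

    # Predictive view: per-word likelihood bound on held-out documents.
    print("held-out per-word bound:", lda.log_perplexity(heldout_bow))

    # Interpretability view: the top words of each inferred topic, which is
    # what the paper's human studies probe rather than eyeballing.
    for k in range(lda.num_topics):
        print(k, [w for w, _ in lda.show_topic(k, topn=5)])

The paper's finding is that these two views can disagree: a model with a better held-out bound may still produce topics whose top words humans find less coherent.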

Links and resources

URL: http://books.nips.cc/papers/files/nips22/NIPS2009_0125.pdf
BibTeX key: chang2009reading


