Inproceedings

On Smoothing and Inference for Topic Models.

Arthur Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh.
UAI, pages 27-34. AUAI Press, (2009)

Meta data

Users

  • @dblp
  • @folke

Comments and Reviews

  • @ckling
    6 years ago
    Well-written, clear paper showing the similarities between EM, Gibbs sampling, and VB inference. The paper contains one strange mistake: the MAP estimate of multinomials with sparse Dirichlet priors does not actually yield "negative probabilities" (did they read Feynman?). At those stationary points the second derivative is positive, so they are minima, not maxima; the true maximum lies on the boundary, in {0,1}, which is easy to derive (see the sketch below). This is well known for the Arcsine distribution, a special case of the Beta/Dirichlet. Still, the paper is excellent, and the mistake does not affect its message: with zero probabilities the perplexity would be terrible anyway :) Btw., this mistake made it into the Spark ML library.
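
A minimal sketch of the commenter's point, in the one-dimensional case they mention: take a symmetric Beta(α, α) prior with α < 1 (the Arcsine distribution is the case α = 1/2), no observations, and examine the interior stationary point of the log density:

\log p(\theta) = (\alpha - 1)\left[\log\theta + \log(1-\theta)\right] + \text{const}

\frac{d}{d\theta}\log p(\theta) = (\alpha - 1)\left[\frac{1}{\theta} - \frac{1}{1-\theta}\right] = 0 \;\Longrightarrow\; \theta^{\star} = \tfrac{1}{2}

\frac{d^{2}}{d\theta^{2}}\log p(\theta) = (1 - \alpha)\left[\frac{1}{\theta^{2}} + \frac{1}{(1-\theta)^{2}}\right] > 0 \quad\text{for } \alpha < 1

So the interior stationary point is a minimum, and the density grows without bound as θ → 0 or θ → 1: the maximum lies on the boundary {0, 1}. The multinomial case is analogous: when a count satisfies n_k + α − 1 < 0, the stationary value (n_k + α − 1)/(N + Kα − K) is negative and is not the MAP estimate; the maximizer instead sets θ_k = 0.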