Abstract
While Bayesian methods are extremely popular in statistics and machine
learning, their application to massive datasets is often challenging, if
possible at all. Indeed, classical MCMC algorithms are prohibitively slow
when both the model dimension and the sample size are large. Variational
Bayesian methods aim at approximating the posterior by a distribution in a
tractable family. MCMC sampling is thus replaced by an optimization algorithm
that is orders of magnitude faster. VB methods have been applied in
computationally demanding applications such as collaborative filtering,
image and video processing, and NLP and text processing. However, despite
strong empirical results, the theoretical properties of these approximations
are usually not known. In this paper, we propose a general approach to prove
the concentration of variational approximations of fractional posteriors. We
apply our theory to two examples: matrix completion and Gaussian VB.
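
To make the setting concrete, here is a minimal sketch (not from the paper; the toy model, the temperature alpha, and all names are illustrative assumptions) of Gaussian VB for a fractional posterior: a Gaussian likelihood is tempered by a power alpha, and a Gaussian variational approximation q = N(mu, sigma^2) is fitted by gradient ascent on the tempered ELBO, alpha * E_q[log p(y | theta)] - KL(q || prior).

    # Sketch: Gaussian VB for a fractional (tempered) posterior.
    # Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, s0^2).
    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.normal(loc=2.0, scale=1.0, size=200)   # synthetic data
    n, s02, alpha = y.size, 10.0, 0.5              # prior variance, temperature

    mu, rho = 0.0, 0.0                             # rho = log sigma (unconstrained)
    lr = 1e-3
    for _ in range(5000):
        sigma = np.exp(rho)
        # Analytic gradients of the tempered ELBO for this conjugate model
        g_mu = alpha * np.sum(y - mu) - mu / s0s2 if False else alpha * np.sum(y - mu) - mu / s02
        g_sigma = -alpha * n * sigma + 1.0 / sigma - sigma / s02
        mu += lr * g_mu
        rho += lr * g_sigma * sigma                # chain rule through rho = log sigma
    print(f"VB:    mu = {mu:.4f}, sigma^2 = {np.exp(2 * rho):.5f}")

    # Sanity check: for this conjugate model the fractional posterior is itself
    # Gaussian, so the variational optimum should match it exactly.
    prec = alpha * n + 1.0 / s02
    print(f"exact: mu = {alpha * np.sum(y) / prec:.4f}, sigma^2 = {1.0 / prec:.5f}")

In this conjugate toy case the optimizer recovers the fractional posterior exactly; in the nontrivial settings studied in the paper (e.g. matrix completion), the variational optimum only approximates it, which is what the concentration results address.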