D. Lee and H. Seung. Algorithms for Non-negative Matrix Factorization. In Advances in Neural Information Processing Systems, pages 556--562. MIT Press, 2000
Abstract
Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.
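For concreteness, below is a minimal NumPy sketch of the two multiplicative update rules the abstract describes: one pair of updates for the least squares (Frobenius) objective and one for the generalized Kullback-Leibler divergence. The update rules themselves follow the paper; the function names, the random initialization, the fixed iteration count, and the small eps added to denominators to avoid division by zero are illustrative assumptions, not part of the original algorithms.

import numpy as np

def nmf_euclidean(V, r, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates minimizing the least squares error ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))  # non-negative random initialization (an assumption)
    H = rng.random((r, m))
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H)
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def nmf_kl(V, r, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates minimizing the generalized KL divergence D(V || WH)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(n_iter):
        # H_{a,mu} <- H_{a,mu} * [sum_i W_{ia} V_{i,mu} / (WH)_{i,mu}] / sum_k W_{ka}
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        # W_{ia} <- W_{ia} * [sum_mu H_{a,mu} V_{i,mu} / (WH)_{i,mu}] / sum_nu H_{a,nu}
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H

For a non-negative data matrix V of shape (n, m) and a chosen rank r, W, H = nmf_euclidean(V, r) returns non-negative factors with W @ H approximating V. Because each update multiplies the current factor by a non-negative ratio, non-negativity is preserved automatically, and each step leaves its objective non-increasing, which is the monotonic convergence property the abstract refers to.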
%0 Conference Paper
%1 Lee00algorithmsfor
%A Lee, Daniel D.
%A Seung, H. Sebastian
%B Advances in Neural Information Processing Systems
%D 2000
%I MIT Press
%K NMF optimization
%P 556--562
%T Algorithms for Non-negative Matrix Factorization
%U http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.7566
%X Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.
@inproceedings{Lee00algorithmsfor,
abstract = {Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.},
added-at = {2018-04-02T05:59:19.000+0200},
author = {Lee, Daniel D. and Seung, H. Sebastian},
biburl = {https://www.bibsonomy.org/bibtex/239d254a550b576bdd012e64207ec4540/shabbychef},
booktitle = {Advances in Neural Information Processing Systems},
description = {Algorithms for Non-negative Matrix Factorization},
interhash = {cf8707cab8812be3c21d3e5c10fad477},
intrahash = {39d254a550b576bdd012e64207ec4540},
keywords = {NMF optimization},
pages = {556--562},
publisher = {MIT Press},
timestamp = {2018-04-02T05:59:19.000+0200},
title = {Algorithms for Non-negative Matrix Factorization},
url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.7566},
year = 2000
}