
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization

Martin Slawski and Matthias Hein.
(2012). arXiv:1205.0953. Comment: 43 pages, 7 figures; extends the NIPS 2011 paper 'Sparse recovery by thresholded non-negative least squares'.

Abstract

Least squares fitting is in general not useful for high-dimensional linear models, in which the number of predictors is of the same or even larger order of magnitude than the number of samples. Theory developed in recent years has coined a paradigm according to which sparsity-promoting regularization is regarded as a necessity in such a setting. Deviating from this paradigm, we show that non-negativity constraints on the regression coefficients may be as effective as explicit regularization. For a broad range of designs whose Gram matrix has non-negative entries, we establish bounds on the $\ell_2$-prediction error of non-negative least squares (NNLS) whose form qualitatively matches corresponding results for $\ell_1$-regularization. Under slightly stronger conditions, it is established that NNLS followed by hard thresholding performs excellently in terms of support recovery of an (approximately) sparse target, in some cases improving over $\ell_1$-regularization. A substantial advantage of NNLS over regularization-based approaches is the absence of tuning parameters, which is convenient from a computational as well as from a practitioner's point of view. Deconvolution of positive spike trains is presented as an application.
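To make the tuning-free pipeline described in the abstract concrete, here is a minimal sketch of NNLS followed by hard thresholding on simulated data. It is not the authors' implementation: the design (i.i.d. uniform entries, chosen so that the Gram matrix has non-negative entries), the noise level, and the threshold value `tau` are all illustrative assumptions, and `scipy.optimize.nnls` is simply one off-the-shelf NNLS solver.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy high-dimensional setup with p > n; uniform [0, 1] entries give a
# Gram matrix X^T X with non-negative entries (illustrative choice only).
n, p, s = 100, 300, 5
X = rng.random((n, p))
beta = np.zeros(p)
beta[:s] = rng.uniform(1.0, 2.0, size=s)      # sparse, non-negative target
y = X @ beta + 0.1 * rng.standard_normal(n)

# Non-negative least squares: minimize ||y - X b||_2 subject to b >= 0.
# There is no regularization term and hence no tuning parameter.
beta_nnls, _ = nnls(X, y)

# Hard thresholding of the NNLS solution for support recovery; the threshold
# here is an arbitrary illustrative value, not the paper's data-driven rule.
tau = 0.5
support_est = np.flatnonzero(beta_nnls > tau)
print("estimated support:", support_est)
print("true support:     ", np.flatnonzero(beta))
```

In this sketch the only post-processing choice is the threshold; the NNLS fit itself requires no penalty parameter, which is the practical point the abstract emphasizes.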
