Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, and W. Yin (2019). arXiv:1905.05406. Published in the International Conference on Machine Learning, 2019.
Abstract
Plug-and-play (PnP) is a non-convex framework that integrates modern
denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or
other proximal algorithms. An advantage of PnP is that one can use pre-trained
denoisers when there is not sufficient data for end-to-end training. Although
PnP has been recently studied extensively with great empirical success,
theoretical analysis addressing even the most basic question of convergence has
been insufficient. In this paper, we theoretically establish convergence of
PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain
Lipschitz condition on the denoisers. We then propose real spectral
normalization, a technique for training deep learning-based denoisers to
satisfy the proposed Lipschitz condition. Finally, we present experimental
results validating the theory.
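The PnP-ADMM scheme the abstract refers to alternates a proximal step on the data-fidelity term with a call to an off-the-shelf denoiser in place of the prior's proximal operator. A minimal numpy sketch follows; the least-squares data term, the step size, and the soft-thresholding "denoiser" are toy stand-ins chosen for illustration (the paper uses BM3D and learned denoisers), not the paper's implementation. Note that soft-thresholding is 1-Lipschitz, the kind of property the paper's convergence analysis places on the denoiser.

```python
import numpy as np

def pnp_admm(A, b, denoise, alpha=0.5, iters=200):
    """Sketch of PnP-ADMM for min_x 0.5||Ax - b||^2 + (implicit prior).
    The data-term prox is a linear solve; `denoise` replaces the
    prior's prox -- that substitution is the plug-and-play step."""
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)
    u = np.zeros(n)
    # Precompute the normal-equations matrix for the quadratic prox.
    M = A.T @ A + np.eye(n) / alpha
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + (y - u) / alpha)  # prox of data term
        y = denoise(x + u)                             # denoiser as prior prox
        u = u + x - y                                  # dual (residual) update
    return y

# Toy 1-Lipschitz denoiser stand-in (NOT from the paper): soft-thresholding.
def soft(v, t=0.05):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Small synthetic recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = pnp_admm(A, b, soft)
```

On this well-conditioned toy problem the iteration recovers `x_true` up to the small bias introduced by the thresholding step.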
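The "real spectral normalization" proposed in the paper controls a denoiser's Lipschitz constant by constraining the spectral norm of each layer during training. A simplified sketch of the core primitive, power iteration followed by rescaling, is below; this dense-matrix version is illustrative only, whereas the paper applies the idea to the convolution operators of a deep denoiser.

```python
import numpy as np

def spectral_normalize(W, iters=100, target=1.0):
    """Estimate the top singular value of W by power iteration, then
    rescale so the linear map is at most `target`-Lipschitz.
    Simplified dense stand-in for the paper's layer-wise scheme."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # top singular value estimate
    # Scale down only if the norm exceeds the target; otherwise leave W alone.
    return W * (target / max(sigma, target))

W = np.random.default_rng(1).standard_normal((20, 20))
Wn = spectral_normalize(W)
```

After normalization the spectral norm of `Wn` is (approximately) at most 1, so the layer cannot expand distances.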
@article{ryu2019plugandplay,
abstract = {Plug-and-play (PnP) is a non-convex framework that integrates modern
denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or
other proximal algorithms. An advantage of PnP is that one can use pre-trained
denoisers when there is not sufficient data for end-to-end training. Although
PnP has been recently studied extensively with great empirical success,
theoretical analysis addressing even the most basic question of convergence has
been insufficient. In this paper, we theoretically establish convergence of
PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain
Lipschitz condition on the denoisers. We then propose real spectral
normalization, a technique for training deep learning-based denoisers to
satisfy the proposed Lipschitz condition. Finally, we present experimental
results validating the theory.},
added-at = {2019-06-09T19:09:14.000+0200},
author = {Ryu, Ernest K. and Liu, Jialin and Wang, Sicheng and Chen, Xiaohan and Wang, Zhangyang and Yin, Wotao},
biburl = {https://www.bibsonomy.org/bibtex/29b45fc5e6d2f75c782273620027a00ea/kirk86},
description = {[1905.05406] Plug-and-Play Methods Provably Converge with Properly Trained Denoisers},
interhash = {03eadace170566431dc8a2b4d24d0f69},
intrahash = {9b45fc5e6d2f75c782273620027a00ea},
keywords = {deep-learning optimization sparsity},
note = {arXiv:1905.05406. Published in the International Conference on Machine Learning, 2019},
timestamp = {2019-06-09T19:09:46.000+0200},
title = {Plug-and-Play Methods Provably Converge with Properly Trained Denoisers},
url = {http://arxiv.org/abs/1905.05406},
year = 2019
}