On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
L. Chizat and F. Bach (2018). arXiv:1805.09545. Comment: Advances in Neural Information Processing Systems (NIPS), Dec 2018, Montréal, Canada.
Abstract
Many tasks in machine learning and signal processing can be solved by
minimizing a convex function of a measure. This includes sparse spikes
deconvolution or training a neural network with a single hidden layer. For
these problems, we study a simple minimization method: the unknown measure is
discretized into a mixture of particles and a continuous-time gradient descent
is performed on their weights and positions. This is an idealization of the
usual way to train neural networks with a large hidden layer. We show that,
when initialized correctly and in the many-particle limit, this gradient flow,
although non-convex, converges to global minimizers. The proof involves
Wasserstein gradient flows, a by-product of optimal transport theory. Numerical
experiments show that this asymptotic behavior is already at play for a
reasonable number of particles, even in high dimension.
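The method sketched in the abstract can be made concrete: the problem is to minimize a convex functional F(mu) over measures mu, and the measure is discretized as mu = (1/m) * sum_i w_i * delta_{theta_i}, which for a single-hidden-layer network means m hidden units with output weights w_i and input weights theta_i. Below is a minimal sketch of the resulting particle gradient descent, written in Python/NumPy with an assumed least-squares objective and ReLU units; the data, hyperparameters, and loss are illustrative placeholders, not the authors' code, and a small fixed step size stands in for the continuous-time gradient flow.

    import numpy as np

    # Hedged sketch (assumed setup, not the paper's code): the measure
    # mu = (1/m) * sum_i w_i * delta_{theta_i} parameterizes a one-hidden-layer
    # ReLU network f(x) = (1/m) * sum_i w_i * relu(theta_i . x), and plain
    # gradient descent on the weights w_i and positions theta_i with a small
    # step approximates the continuous-time gradient flow of the abstract.

    rng = np.random.default_rng(0)
    d, m, n = 10, 200, 50                    # input dim, particles, samples
    X = rng.standard_normal((n, d))          # synthetic inputs (placeholder)
    y = rng.standard_normal(n)               # synthetic targets (placeholder)

    w = rng.standard_normal(m)               # particle weights (output layer)
    theta = rng.standard_normal((m, d))      # particle positions (hidden layer)
    lr = 1e-2                                # small step ~ idealized gradient flow

    for step in range(2000):
        act = np.maximum(X @ theta.T, 0.0)   # (n, m) ReLU activations
        resid = act @ w / m - y              # residual of the squared loss
        grad_w = act.T @ resid / (n * m)     # d loss / d w_i
        # d loss / d theta_i: chain rule through the ReLU indicator (act > 0)
        grad_theta = ((resid[:, None] * (act > 0.0) * w).T @ X) / (n * m)
        w -= lr * grad_w                     # descend on particle weights
        theta -= lr * grad_theta             # descend on particle positions

In the many-particle limit m -> infinity, this discrete dynamic approximates a Wasserstein gradient flow on the space of measures, which is the object for which the paper's global-convergence result is stated.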
Description
[1805.09545] On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
%0 Journal Article
%1 chizat2018global
%A Chizat, Lenaic
%A Bach, Francis
%D 2018
%K convergence optimal-transport optimization readings
%T On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
%U http://arxiv.org/abs/1805.09545
%X Many tasks in machine learning and signal processing can be solved by
minimizing a convex function of a measure. This includes sparse spikes
deconvolution or training a neural network with a single hidden layer. For
these problems, we study a simple minimization method: the unknown measure is
discretized into a mixture of particles and a continuous-time gradient descent
is performed on their weights and positions. This is an idealization of the
usual way to train neural networks with a large hidden layer. We show that,
when initialized correctly and in the many-particle limit, this gradient flow,
although non-convex, converges to global minimizers. The proof involves
Wasserstein gradient flows, a by-product of optimal transport theory. Numerical
experiments show that this asymptotic behavior is already at play for a
reasonable number of particles, even in high dimension.
@article{chizat2018global,
abstract = {Many tasks in machine learning and signal processing can be solved by
minimizing a convex function of a measure. This includes sparse spikes
deconvolution or training a neural network with a single hidden layer. For
these problems, we study a simple minimization method: the unknown measure is
discretized into a mixture of particles and a continuous-time gradient descent
is performed on their weights and positions. This is an idealization of the
usual way to train neural networks with a large hidden layer. We show that,
when initialized correctly and in the many-particle limit, this gradient flow,
although non-convex, converges to global minimizers. The proof involves
Wasserstein gradient flows, a by-product of optimal transport theory. Numerical
experiments show that this asymptotic behavior is already at play for a
reasonable number of particles, even in high dimension.},
added-at = {2019-06-10T21:24:19.000+0200},
author = {Chizat, Lenaic and Bach, Francis},
biburl = {https://www.bibsonomy.org/bibtex/2e456298cb107d5d3768522878325b425/kirk86},
description = {[1805.09545] On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport},
interhash = {9d55992bd267b986be6759f25d60fbe7},
intrahash = {e456298cb107d5d3768522878325b425},
keywords = {convergence optimal-transport optimization readings},
note = {arXiv:1805.09545. Comment: Advances in Neural Information Processing Systems (NIPS), Dec 2018, Montr\'eal, Canada},
timestamp = {2019-09-26T15:22:02.000+0200},
title = {On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport},
url = {http://arxiv.org/abs/1805.09545},
year = 2018
}