Abstract
Multi-layer neural networks are among the most powerful models in machine
learning, yet the fundamental reasons for this success defy mathematical
understanding. Learning a neural network requires optimizing a non-convex,
high-dimensional objective (the risk function), a problem that is usually attacked
using stochastic gradient descent (SGD). Does SGD converge to a global optimum
of the risk or only to a local optimum? In the first case, does this happen
because local minima are absent, or because SGD somehow avoids them? In the
second, why do local minima reached by SGD have good generalization properties?
In this paper we consider a simple case, namely two-layer neural networks,
and prove that, in a suitable scaling limit, the SGD dynamics is captured by a
certain non-linear partial differential equation (PDE) that we call
distributional dynamics (DD). We then consider several specific examples, and
show how DD can be used to prove convergence of SGD to networks with nearly
ideal generalization error. This description allows one to 'average out' some of
the complexities of the landscape of neural networks, and can be used to prove
a general convergence result for noisy SGD.
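The abstract names the DD but does not state it. As a pointer for orientation only, mean-field analyses of this setting yield a PDE of the following general form (a schematic reconstruction from the mean-field literature, not quoted verbatim from the paper), where rho_t is the limiting distribution of the hidden-unit parameters theta:

% Schematic form of the distributional dynamics (a hedged reconstruction;
% see the paper for the exact statement, scaling, and regularity conditions):
% the parameter density \rho_t follows a gradient flow in the space of measures,
\partial_t \rho_t
  = \nabla_\theta \cdot \bigl( \rho_t \, \nabla_\theta \Psi(\theta; \rho_t) \bigr),
\qquad
\Psi(\theta; \rho) = V(\theta) + \int U(\theta, \bar\theta)\, \rho(\mathrm{d}\bar\theta),
% where, roughly, V captures each unit's correlation with the labels and U the
% pairwise interaction between units.

The key point is that the flow is on a distribution over single-unit parameters, not on the full weight vector, which is what permits the averaging of landscape complexity mentioned above.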
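To make the setting concrete, here is a minimal Python sketch (not the authors' code) of one-pass SGD on a two-layer network f(x) = (1/N) sum_i a_i sigma(<w_i, x>), the finite-N system whose large-N evolution the DD describes. The teacher model, activation, step size, and dimensions are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, N, lr, steps = 20, 200, 0.05, 20_000

# Hidden-unit parameters theta_i = (a_i, w_i); the distributional dynamics
# tracks their empirical distribution rather than the individual weights.
a = rng.normal(size=N)
W = rng.normal(size=(N, d)) / np.sqrt(d)

def sigma(z):
    return np.tanh(z)

def dsigma(z):
    return 1.0 - np.tanh(z) ** 2

# Assumed synthetic task (not from the paper): a noisy linear teacher.
u = rng.normal(size=d) / np.sqrt(d)

for t in range(steps):
    x = rng.normal(size=d)
    y = u @ x + 0.1 * rng.normal()

    z = W @ x                          # pre-activations <w_i, x>
    out = a @ sigma(z) / N             # network output with 1/N normalization
    err = out - y                      # squared-loss residual

    # Per-unit SGD step; the mean-field convention absorbs the 1/N output
    # normalization into the step size, so each unit moves O(lr) per step.
    a -= lr * err * sigma(z)
    W -= lr * err * (a * dsigma(z))[:, None] * x[None, :]

    if (t + 1) % 5000 == 0:
        # Monte Carlo estimate of the population risk on fresh samples.
        X = rng.normal(size=(1000, d))
        preds = sigma(X @ W.T) @ a / N
        print(f"step {t + 1}: risk ~ {np.mean((preds - X @ u) ** 2):.4f}")

Under the DD picture, one would summarize a run like this by the empirical measure (1/N) sum_i delta_{(a_i, w_i)} at each step, which concentrates on the PDE solution as N grows and the step size shrinks.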