We study the properties of common loss surfaces through their Hessian matrix. In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk, centered near zero, and (2) outliers away from the bulk. We present numerical evidence and mathematical justifications for the following conjectures laid out by Sagun et al. (2016): fixing the data and increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data (for instance, adding more clusters or making the data less separable) only affects the outliers. We believe that our observations have striking implications for non-convex optimization in high dimensions. First, the flatness of such landscapes (which can be measured by the singularity of the Hessian) implies that classical notions of basins of attraction may be quite misleading, and that the discussion of wide versus narrow basins may need a new perspective built around over-parametrization and redundancy, which can create large connected components at the bottom of the landscape. Second, the dependence of the small number of large eigenvalues on the data distribution can be linked to the spectrum of the covariance matrix of the gradients of the model outputs. With this in mind, we may reevaluate the connections within the data-architecture-algorithm framework of a model, hoping that this sheds light on the geometry of high-dimensional, non-convex spaces in modern applications. In particular, we present a case that links the two observations: small-batch and large-batch gradient descent appear to converge to different basins of attraction, but we show that they are in fact connected through a flat region and therefore belong to the same basin.
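
The abstract's first claim (a Hessian spectrum split into a near-zero bulk and a few data-dependent outliers) can be probed at small scale. Below is a minimal sketch, not the authors' code: it trains a tiny two-layer classifier on three synthetic Gaussian clusters with full-batch gradient descent in PyTorch, then computes the full Hessian of the training loss and its eigenvalues. The architecture, cluster layout, learning rate, and iteration count are illustrative assumptions.

import math
import torch

torch.manual_seed(0)

# Synthetic data: three Gaussian clusters in 2-D (the "number of clusters" knob).
n_per_class, n_classes = 100, 3
centers = torch.tensor([[2.0, 0.0], [-2.0, 0.0], [0.0, 2.0]])
X = torch.cat([c + 0.5 * torch.randn(n_per_class, 2) for c in centers])
y = torch.arange(n_classes).repeat_interleave(n_per_class)

# A tiny two-layer network written as a function of one flat parameter vector,
# so the full Hessian of the loss can be taken with respect to that vector.
d_in, d_hidden = 2, 10
shapes = [(d_hidden, d_in), (d_hidden,), (n_classes, d_hidden), (n_classes,)]
n_params = sum(math.prod(s) for s in shapes)

def unflatten(theta):
    params, i = [], 0
    for s in shapes:
        n = math.prod(s)
        params.append(theta[i:i + n].view(s))
        i += n
    return params

def loss_fn(theta):
    W1, b1, W2, b2 = unflatten(theta)
    h = torch.tanh(X @ W1.T + b1)
    logits = h @ W2.T + b2
    return torch.nn.functional.cross_entropy(logits, y)

# Plain full-batch gradient descent down to (approximately) the bottom of the landscape.
theta = torch.nn.Parameter(0.1 * torch.randn(n_params))
opt = torch.optim.SGD([theta], lr=0.2)
for _ in range(3000):
    opt.zero_grad()
    loss_fn(theta).backward()
    opt.step()

# Full Hessian at the solution and its spectrum: a bulk of eigenvalues near zero
# plus a handful of outliers (roughly on the order of the number of classes).
H = torch.autograd.functional.hessian(loss_fn, theta.detach())
eigs = torch.linalg.eigvalsh(H)  # ascending order
print("five largest eigenvalues:", eigs[-5:])
print("fraction with |eigenvalue| < 1e-3:", (eigs.abs() < 1e-3).float().mean().item())

In a setup of this kind, one would expect widening the hidden layer to mostly enlarge the near-zero bulk, while adding clusters (classes) mostly changes the few large outliers, which is the data dependence the abstract ties to the covariance of the gradients of the model outputs.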
Description
[1706.04454] Empirical Analysis of the Hessian of Over-Parametrized Neural Networks
@article{sagun2017empirical,
author = {Sagun, Levent and Evci, Utku and Guney, V. Ugur and Dauphin, Yann and Bottou, Leon},
keywords = {dynamic non-linear optimization readings},
note = {arXiv:1706.04454; minor update for ICLR 2018 Workshop Track presentation},
title = {Empirical Analysis of the Hessian of Over-Parametrized Neural Networks},
url = {http://arxiv.org/abs/1706.04454},
year = 2017
}