L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio (2017). Sharp Minima Can Generalize For Deep Nets. arXiv:1703.04933. Comment: 8.5 pages of main content, 2.5 of bibliography, and 1 page of appendix.
Abstract
Despite their overwhelming capacity to overfit, deep learning architectures
tend to generalize relatively well to unseen data, allowing them to be deployed
in practice. However, explaining why this is the case is still an open area of
research. One standing hypothesis that is gaining popularity, e.g. Hochreiter &
Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the
loss function found by stochastic gradient-based methods results in good
generalization. This paper argues that most notions of flatness are problematic
for deep models and cannot be directly applied to explain generalization.
Specifically, when focusing on deep networks with rectifier units, we can
exploit the particular geometry of parameter space induced by the inherent
symmetries that these architectures exhibit to build equivalent models
corresponding to arbitrarily sharper minima. Furthermore, if we allow a function
to be reparametrized, the geometry of its parameters can change drastically
without affecting its generalization properties.
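To make the symmetry argument concrete, here is a minimal NumPy sketch (not code from the paper; the layer sizes, variable names, and the finite-difference curvature estimate are illustrative assumptions). It uses the non-negative homogeneity of rectifiers, relu(alpha * z) = alpha * relu(z) for alpha > 0: rescaling the two layers of a one-hidden-layer ReLU network by (alpha, 1/alpha) leaves the computed function unchanged, yet scales the loss curvature along second-layer directions by roughly alpha**2. The weights below are random rather than trained, but the same scaling holds at an actual minimum, which is how equivalent models with arbitrarily sharper minima are obtained.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 10))    # a batch of inputs (sizes are arbitrary)
y = rng.normal(size=(64, 1))     # regression targets
W1 = rng.normal(size=(10, 32))   # first-layer weights
W2 = rng.normal(size=(32, 1))    # second-layer weights

def relu(z):
    return np.maximum(z, 0.0)

def predict(W1_, W2_):
    return relu(x @ W1_) @ W2_

def loss(W1_, W2_):
    return np.mean((predict(W1_, W2_) - y) ** 2)

alpha = 100.0                          # any alpha > 0 gives an equivalent model
W1s, W2s = alpha * W1, W2 / alpha      # the "sharper" reparametrization

# Non-negative homogeneity: relu(alpha * z) = alpha * relu(z) for alpha > 0,
# so both parametrizations compute exactly the same function.
assert np.allclose(predict(W1, W2), predict(W1s, W2s))

# Finite-difference estimate of the loss curvature along a fixed direction d
# in the second layer; under the rescaling it grows by roughly alpha**2.
d = rng.normal(size=W2.shape)
d /= np.linalg.norm(d)
eps = 1e-3

def curvature(W1_, W2_):
    return (loss(W1_, W2_ + eps * d) - 2 * loss(W1_, W2_)
            + loss(W1_, W2_ - eps * d)) / eps ** 2

print(curvature(W1, W2), curvature(W1s, W2s))  # second value is ~alpha**2 larger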
@misc{dinh2017sharp,
abstract = {Despite their overwhelming capacity to overfit, deep learning architectures
tend to generalize relatively well to unseen data, allowing them to be deployed
in practice. However, explaining why this is the case is still an open area of
research. One standing hypothesis that is gaining popularity, e.g. Hochreiter &
Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the
loss function found by stochastic gradient based methods results in good
generalization. This paper argues that most notions of flatness are problematic
for deep models and can not be directly applied to explain generalization.
Specifically, when focusing on deep networks with rectifier units, we can
exploit the particular geometry of parameter space induced by the inherent
symmetries that these architectures exhibit to build equivalent models
corresponding to arbitrarily sharper minima. Furthermore, if we allow to
reparametrize a function, the geometry of its parameters can change drastically
without affecting its generalization properties.},
author = {Dinh, Laurent and Pascanu, Razvan and Bengio, Samy and Bengio, Yoshua},
biburl = {https://www.bibsonomy.org/bibtex/2215964524c1428f8fab336d2d067ac15/jk_itwm},
keywords = {SGD large_batch optimization theory to_read},
note = {arXiv:1703.04933. Comment: 8.5 pages of main content, 2.5 of bibliography and 1 page of appendix},
title = {Sharp Minima Can Generalize For Deep Nets},
url = {http://arxiv.org/abs/1703.04933},
year = 2017
}