Algorithm-Dependent Generalization Bounds for Overparameterized Deep
Residual Networks
S. Frei, Y. Cao, and Q. Gu. (2019). arXiv:1910.02934. Comment: 37 pages. In NeurIPS 2019.
Abstract
The skip-connections used in residual networks have become a standard
architecture choice in deep learning due to the increased training stability
and generalization performance they provide, although there has been limited
theoretical understanding of this improvement. In this work, we analyze
overparameterized deep residual networks trained by gradient descent following
random initialization, and demonstrate that (i) the class of networks learned
by gradient descent constitutes a small subset of the entire neural network
function class, and (ii) this subclass of networks is sufficiently large to
guarantee small training error. By showing (i), we demonstrate that deep
residual networks trained with gradient descent have a small generalization
gap between training and test error; together with (ii), this guarantees that
the test error will be small. Our optimization and generalization guarantees
require overparameterization that is only logarithmic in the depth of the
network, whereas all known generalization bounds for deep non-residual
networks have overparameterization requirements that are at least polynomial
in the depth. This provides an explanation for why residual networks are
preferable to non-residual ones.
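
The two-part argument in the abstract follows the standard error decomposition
sketched below. This is a schematic of the reasoning only, written in
illustrative notation that is not taken from the paper: $\mathcal{F}_{\mathrm{GD}}$
denotes the subclass of networks reachable by gradient descent, $L_{\mathcal{D}}$
the test error, and $\widehat{L}_S$ the training error.

\[
  L_{\mathcal{D}}(f)
    \;\le\;
  \underbrace{\widehat{L}_S(f)}_{\text{small by (ii)}}
  \;+\;
  \underbrace{\sup_{g \in \mathcal{F}_{\mathrm{GD}}}
    \bigl( L_{\mathcal{D}}(g) - \widehat{L}_S(g) \bigr)}_{\text{small by (i): } \mathcal{F}_{\mathrm{GD}} \text{ is a small subclass}}
  \qquad \text{for all } f \in \mathcal{F}_{\mathrm{GD}}.
\]

Because (i) bounds the generalization gap uniformly over the small subclass,
and (ii) guarantees the first term is small for the network gradient descent
actually finds, the two together bound the test error.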
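To make the setting concrete, here is a minimal PyTorch sketch of the kind of
model and training procedure the abstract describes: a deep residual network
with skip-connections, randomly (Gaussian) initialized, trained by plain
full-batch gradient descent on the logistic loss. The width, depth, residual
scaling, learning rate, and synthetic data are hypothetical illustration
choices, not the paper's exact parameterization.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One residual block: x -> x + scale * relu(W x) (skip-connection)."""
    def __init__(self, width, scale):
        super().__init__()
        self.linear = nn.Linear(width, width, bias=False)
        nn.init.normal_(self.linear.weight, std=width ** -0.5)  # random Gaussian init
        self.scale = scale  # small residual scaling (illustrative choice)

    def forward(self, x):
        return x + self.scale * F.relu(self.linear(x))

class DeepResNet(nn.Module):
    """Embedding layer, `depth` residual blocks, and a linear output head."""
    def __init__(self, d_in, width, depth, scale=0.01):
        super().__init__()
        self.embed = nn.Linear(d_in, width, bias=False)
        self.blocks = nn.ModuleList(ResidualBlock(width, scale) for _ in range(depth))
        self.head = nn.Linear(width, 1, bias=False)

    def forward(self, x):
        h = self.embed(x)
        for block in self.blocks:
            h = block(h)
        return self.head(h)

# Full-batch gradient descent from random initialization on synthetic data.
torch.manual_seed(0)
X = torch.randn(128, 16)                           # n = 128 examples in R^16
y = torch.randint(0, 2, (128, 1)).float() * 2 - 1  # labels in {-1, +1}
net = DeepResNet(d_in=16, width=512, depth=32)     # overparameterized: width >> n
opt = torch.optim.SGD(net.parameters(), lr=0.1)    # plain gradient descent
for step in range(200):
    loss = F.softplus(-y * net(X)).mean()          # logistic loss log(1 + e^{-y f(x)})
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")

The small per-block scaling keeps the network close to the identity map at
initialization, which is one common way analyses of this kind control the
residual stream; the paper's actual scaling and initialization may differ.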
@inproceedings{frei2019algorithmdependent,
  author    = {Frei, Spencer and Cao, Yuan and Gu, Quanquan},
  title     = {Algorithm-Dependent Generalization Bounds for Overparameterized Deep Residual Networks},
  booktitle = {Advances in Neural Information Processing Systems 32 (NeurIPS 2019)},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.02934},
  note      = {arXiv:1910.02934, 37 pages},
  keywords  = {bounds deep-learning generalization neurips2019 readings theory}
}