Data augmentation instead of explicit regularization
A. Hernández-García and P. König (2018). arXiv:1806.03852. Comment: Major changes: 1. New section (3. Theoretical insights), with theoretical insights from statistical learning theory. 2. Supplementary material is restructured; new section with a discussion about regularization taxonomy. 3. The overall text has been revised, hopefully improved.
Abstract
Modern deep artificial neural networks have achieved impressive results
through models with orders of magnitude more parameters than training examples,
which control overfitting with the help of regularization. Regularization can
be implicit, as is the case with stochastic gradient descent and parameter
sharing in convolutional layers, or explicit. Explicit regularization
techniques, whose most common forms are weight decay and dropout, have proven
successful in terms of improved generalization, but they blindly reduce the
effective capacity of the model, introduce sensitive hyper-parameters, and
require deeper and wider architectures to compensate for the reduced capacity.
In contrast, data augmentation techniques exploit domain knowledge to increase
the number of training examples and improve generalization without reducing the
effective capacity and without introducing model-dependent parameters, since
they are applied to the training data. In this paper, we systematically contrast
data augmentation and explicit regularization on three popular architectures and
three data sets. Our results demonstrate that data augmentation alone can match
or exceed the performance of regularized models and exhibits much higher
adaptability to changes in the architecture and the amount of training data.
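The contrast drawn in the abstract can be made concrete with a short sketch. The snippet below, assuming PyTorch and torchvision (the paper itself does not prescribe a framework, and the transforms and hyper-parameter values are illustrative choices, not the authors' exact protocol), shows explicit regularization attached to the model and optimizer versus data augmentation applied only to the training images.

# A minimal sketch, assuming PyTorch and torchvision. Hyper-parameters and
# transforms below are illustrative, not the paper's exact experimental setup.
import torch
import torch.nn as nn
from torchvision import transforms

# Explicit regularization: dropout inside the model ...
model_with_dropout = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),            # explicit regularization: dropout
    nn.Linear(512, 10),
)
# ... and weight decay in the optimizer. Both act on the model/loss and
# effectively reduce the capacity that the architecture can use.
optimizer = torch.optim.SGD(
    model_with_dropout.parameters(), lr=0.1, momentum=0.9,
    weight_decay=5e-4,            # explicit regularization: L2 penalty
)

# Data augmentation: domain-informed transformations applied to the training
# data rather than to the model or the objective.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(32, padding=4),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
# Each epoch sees a differently transformed version of every image, which
# increases the effective number of training examples without reducing the
# model's capacity or adding model-dependent hyper-parameters.

Because the augmentation pipeline lives entirely on the data side, the same transforms can be reused unchanged when the architecture or the amount of training data changes, which is the adaptability the paper's experiments highlight.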
Description
[1806.03852v4] Data augmentation instead of explicit regularization
%0 Journal Article
%1 hernandezgarcia2018augmentation
%A Hernández-García, Alex
%A König, Peter
%D 2018
%K augmentation regularisation
%T Data augmentation instead of explicit regularization
%U http://arxiv.org/abs/1806.03852
%X Modern deep artificial neural networks have achieved impressive results
through models with orders of magnitude more parameters than training examples,
which control overfitting with the help of regularization. Regularization can
be implicit, as is the case with stochastic gradient descent and parameter
sharing in convolutional layers, or explicit. Explicit regularization
techniques, whose most common forms are weight decay and dropout, have proven
successful in terms of improved generalization, but they blindly reduce the
effective capacity of the model, introduce sensitive hyper-parameters, and
require deeper and wider architectures to compensate for the reduced capacity.
In contrast, data augmentation techniques exploit domain knowledge to increase
the number of training examples and improve generalization without reducing the
effective capacity and without introducing model-dependent parameters, since
they are applied to the training data. In this paper, we systematically contrast
data augmentation and explicit regularization on three popular architectures and
three data sets. Our results demonstrate that data augmentation alone can match
or exceed the performance of regularized models and exhibits much higher
adaptability to changes in the architecture and the amount of training data.
@article{hernandezgarcia2018augmentation,
abstract = {Modern deep artificial neural networks have achieved impressive results
through models with orders of magnitude more parameters than training examples,
which control overfitting with the help of regularization. Regularization can
be implicit, as is the case with stochastic gradient descent and parameter
sharing in convolutional layers, or explicit. Explicit regularization
techniques, whose most common forms are weight decay and dropout, have proven
successful in terms of improved generalization, but they blindly reduce the
effective capacity of the model, introduce sensitive hyper-parameters, and
require deeper and wider architectures to compensate for the reduced capacity.
In contrast, data augmentation techniques exploit domain knowledge to increase
the number of training examples and improve generalization without reducing the
effective capacity and without introducing model-dependent parameters, since
they are applied to the training data. In this paper, we systematically contrast
data augmentation and explicit regularization on three popular architectures and
three data sets. Our results demonstrate that data augmentation alone can match
or exceed the performance of regularized models and exhibits much higher
adaptability to changes in the architecture and the amount of training data.},
added-at = {2019-08-23T16:51:00.000+0200},
author = {Hernández-García, Alex and König, Peter},
biburl = {https://www.bibsonomy.org/bibtex/253ddda096e415e0437d93fb59823ba45/kirk86},
description = {[1806.03852v4] Data augmentation instead of explicit regularization},
interhash = {e82fdbced2db74fe4a29d1d4809273bf},
intrahash = {53ddda096e415e0437d93fb59823ba45},
keywords = {augmentation regularisation},
note = {arXiv:1806.03852. Comment: Major changes: 1. New section (3. Theoretical insights), with theoretical insights from statistical learning theory. 2. Supplementary material is restructured; new section with a discussion about regularization taxonomy. 3. The overall text has been revised, hopefully improved},
timestamp = {2019-09-26T16:00:39.000+0200},
title = {Data augmentation instead of explicit regularization},
url = {http://arxiv.org/abs/1806.03852},
year = 2018
}