On the difficulty of training Recurrent Neural Networks
R. Pascanu, T. Mikolov, and Y. Bengio. (2012). arXiv:1211.5063. Comment: Improved description of the exploding gradient problem and description and analysis of the vanishing gradient problem.
Abstract
There are two widely known issues with properly training Recurrent Neural
Networks, the vanishing and the exploding gradient problems detailed in Bengio
et al. (1994). In this paper we attempt to improve the understanding of the
underlying issues by exploring these problems from an analytical, a geometric
and a dynamical systems perspective. Our analysis is used to justify a simple
yet effective solution. We propose a gradient norm clipping strategy to deal
with exploding gradients and a soft constraint for the vanishing gradients
problem. We validate empirically our hypothesis and proposed solutions in the
experimental section.
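The clipping strategy the abstract refers to is simple enough to state in a few lines: whenever the norm of the gradient exceeds a threshold, rescale the gradient so its norm equals that threshold, preserving its direction. Below is a minimal NumPy sketch of that rule; the function name, threshold value, and NumPy implementation are illustrative assumptions, not the authors' code.

import numpy as np

def clip_gradient_norm(grad: np.ndarray, threshold: float) -> np.ndarray:
    """Rescale grad so its L2 norm is at most `threshold`.

    If ||grad|| > threshold, return (threshold / ||grad||) * grad;
    otherwise return grad unchanged. Direction is preserved, only
    the step size is bounded.
    """
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

# Example: a gradient with norm 5 is rescaled to norm 1.
g = np.array([3.0, 4.0])
clipped = clip_gradient_norm(g, threshold=1.0)
print(clipped, np.linalg.norm(clipped))  # [0.6 0.8] 1.0

Modern frameworks ship equivalents of this operation (e.g. torch.nn.utils.clip_grad_norm_ in PyTorch), with the threshold treated as a tunable hyperparameter.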
Description
On the difficulty of training Recurrent Neural Networks - 1211.5063.pdf
%0 Generic
%1 pascanu2012difficulty
%A Pascanu, Razvan
%A Mikolov, Tomas
%A Bengio, Yoshua
%D 2012
%K backpropagation deep_learning gradients mlnlp rnn
%T On the difficulty of training Recurrent Neural Networks
%U http://arxiv.org/abs/1211.5063
%X There are two widely known issues with properly training Recurrent Neural
Networks, the vanishing and the exploding gradient problems detailed in Bengio
et al. (1994). In this paper we attempt to improve the understanding of the
underlying issues by exploring these problems from an analytical, a geometric
and a dynamical systems perspective. Our analysis is used to justify a simple
yet effective solution. We propose a gradient norm clipping strategy to deal
with exploding gradients and a soft constraint for the vanishing gradients
problem. We validate empirically our hypothesis and proposed solutions in the
experimental section.
@misc{pascanu2012difficulty,
abstract = {There are two widely known issues with properly training Recurrent Neural
Networks, the vanishing and the exploding gradient problems detailed in Bengio
et al. (1994). In this paper we attempt to improve the understanding of the
underlying issues by exploring these problems from an analytical, a geometric
and a dynamical systems perspective. Our analysis is used to justify a simple
yet effective solution. We propose a gradient norm clipping strategy to deal
with exploding gradients and a soft constraint for the vanishing gradients
problem. We validate empirically our hypothesis and proposed solutions in the
experimental section.},
added-at = {2018-05-22T20:47:51.000+0200},
author = {Pascanu, Razvan and Mikolov, Tomas and Bengio, Yoshua},
biburl = {https://www.bibsonomy.org/bibtex/2e6d07e805cb84e0b4ce318ebae329622/dallmann},
description = {On the difficulty of training Recurrent Neural Networks - 1211.5063.pdf},
interhash = {18f1d8d4975db3dfd5634a4c762e0b67},
intrahash = {e6d07e805cb84e0b4ce318ebae329622},
keywords = {backpropagation deep_learning gradients mlnlp rnn},
note = {cite arxiv:1211.5063. Comment: Improved description of the exploding gradient problem and description and analysis of the vanishing gradient problem},
timestamp = {2018-05-22T20:47:51.000+0200},
title = {On the difficulty of training Recurrent Neural Networks},
url = {http://arxiv.org/abs/1211.5063},
year = 2012
}