Abstract
Common nonlinear activation functions used in neural networks can cause
training difficulties due to the saturation behavior of the activation
function, which may hide dependencies that are not visible to vanilla-SGD
(using first-order gradients only). Gating mechanisms that use softly
saturating activation functions to emulate the discrete switching of digital
logic circuits are good examples of this. We propose to exploit the injection
of appropriate noise so that the gradients may flow easily, even if the
noiseless application of the activation function would yield zero gradient.
Large noise will dominate the noise-free gradient and allow stochastic gradient
descent to explore more. By adding noise only to the problematic parts of the
activation function, we allow the optimization procedure to explore the
boundary between the degenerate (saturating) and the well-behaved parts of the
activation function. We also establish connections to simulated annealing, when
the amount of noise is annealed down, making it easier to optimize hard
objective functions. We find experimentally that replacing such saturating
activation functions by noisy variants helps training in many contexts,
yielding state-of-the-art or competitive results on different datasets and
tasks, especially when training seems to be the most difficult, e.g., when
curriculum learning is necessary to obtain good results.
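As a rough illustration of the core idea (a simplified sketch, not the exact parameterization proposed in the paper), the snippet below adds zero-mean noise to a hard-saturating activation only where the unit is saturated, with the noise magnitude scaled by how far the pre-activation lies beyond the linear region; the hypothetical `noise_scale` parameter acts as the annealing knob, and decaying it toward zero recovers the deterministic activation.

```python
import numpy as np

def hard_tanh(x):
    """Hard-saturating activation: linear on [-1, 1], flat (zero gradient) outside."""
    return np.clip(x, -1.0, 1.0)

def noisy_hard_tanh(x, noise_scale=1.0, rng=None):
    """Illustrative noisy variant: inject noise only in the saturated regions,
    so stochastic gradients can still react to units pushed into the flat parts."""
    rng = rng or np.random.default_rng()
    h = hard_tanh(x)
    # |x - h| is zero inside the linear region and grows with the degree of
    # saturation, so the noise is confined to the problematic (flat) parts.
    saturation = np.abs(x - h)
    noise = rng.standard_normal(x.shape) * noise_scale * saturation
    return h + noise
```

In this sketch, gradually lowering `noise_scale` over training plays the role of the annealing schedule mentioned above: early on, large noise lets the optimizer explore across the saturation boundary; later, the activation approaches its noiseless form.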