Abstract
Intriguing empirical evidence exists that deep learning can work well with
exotic schedules for varying the learning rate. This paper suggests that the
phenomenon may be due to Batch Normalization, or BN (Ioffe & Szegedy, 2015), which
is ubiquitous and provides benefits in optimization and generalization across
all standard architectures. The following new results are shown about BN with
weight decay and momentum (in other words, the typical use case, which was not
considered in earlier theoretical analyses of stand-alone BN (Ioffe & Szegedy,
2015; Santurkar et al., 2018; Arora et al., 2018)).
1. Training can be done using SGD with momentum and an exponentially
increasing learning rate schedule, i.e., the learning rate increases by a $(1
+\alpha)$ factor in every epoch for some $\alpha > 0$. (A precise statement appears in the
paper; a code sketch follows this list.) To the best of our knowledge, this is the first time such a rate
schedule has been used successfully, let alone for highly successful
architectures. As expected, such training rapidly blows up the network weights, but
the net stays well-behaved due to normalization.
2. Mathematical explanation of the success of the above rate schedule: a
rigorous proof that it is equivalent to the standard setting of BN + SGD +
Standard Rate Tuning + Weight Decay + Momentum. This equivalence holds for other
normalization layers as well, e.g., Group Normalization (Wu & He, 2018),
Layer Normalization (Ba et al., 2016), Instance Norm (Ulyanov et al., 2016), etc.
3. A worked-out toy example illustrating the above linkage of
hyper-parameters. Using either weight decay or BN alone reaches the global minimum,
but convergence fails when both are used.
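For concreteness, below is a minimal sketch of the exponentially increasing schedule from point 1, written as a PyTorch-style training loop. The toy model, data, and the value of $\alpha$ are illustrative placeholders, not the architectures or hyper-parameters used in the paper.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy data and a small BN network (placeholders for the paper's standard architectures).
data = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32, shuffle=True)
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.BatchNorm1d(32),   # normalization keeps the net well-behaved even as weights grow
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
criterion = torch.nn.CrossEntropyLoss()

alpha = 0.05                     # hypothetical per-epoch growth factor
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# ExponentialLR multiplies the learning rate by gamma each time scheduler.step() is
# called; stepping once per epoch gives lr_t = 0.1 * (1 + alpha)**t.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1 + alpha)

for epoch in range(20):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()             # learning rate increases by (1 + alpha) every epoch
```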