Batch Normalization Biases Deep Residual Networks Towards Shallow Paths

Soham De and Samuel L. Smith (2020). arXiv:2002.10444.

Abstract

Batch normalization has multiple benefits. It improves the conditioning of the loss landscape, and is a surprisingly effective regularizer. However, the most important benefit of batch normalization arises in residual networks, where it dramatically increases the largest trainable depth. We identify the origin of this benefit: At initialization, batch normalization downscales the residual branch relative to the skip connection, by a normalizing factor proportional to the square root of the network depth. This ensures that, early in training, the function computed by deep normalized residual networks is dominated by shallow paths with well-behaved gradients. We use this insight to develop a simple initialization scheme which can train very deep residual networks without normalization. We also clarify that, although batch normalization does enable stable training with larger learning rates, this benefit is only useful when one wishes to parallelize training over large batch sizes. Our results help isolate the distinct benefits of batch normalization in different architectures.
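
The downscaling effect described in the abstract can be illustrated with a short simulation. The following is a minimal NumPy sketch, not the authors' code: it stacks residual blocks x_{l+1} = x_l + f_l(x_l), where the residual branch f_l is a random variance-preserving linear map followed by batch normalization. Because the skip path accumulates variance roughly linearly with depth while the normalized branch always has unit variance, the branch is effectively downscaled relative to the skip connection by a factor of order 1/sqrt(l). The widths, depth, and helper names are illustrative assumptions.

# Sketch (assumed setup): unit-variance residual branches added to a growing skip path.
import numpy as np

rng = np.random.default_rng(0)
batch, width, depth = 256, 128, 50

def residual_branch(x):
    # A random linear map scaled to roughly preserve variance.
    w = rng.normal(0.0, 1.0 / np.sqrt(x.shape[1]), size=(x.shape[1], x.shape[1]))
    return x @ w

def batch_norm(x, eps=1e-5):
    # Normalize each feature to zero mean and unit variance over the batch.
    return (x - x.mean(0)) / np.sqrt(x.var(0) + eps)

x = rng.normal(size=(batch, width))
for l in range(1, depth + 1):
    branch = batch_norm(residual_branch(x))   # unit-variance residual branch
    skip_var, branch_var = x.var(), branch.var()
    x = x + branch                            # normalized residual block
    if l in (1, 10, 50):
        # The branch/skip variance ratio shrinks like 1/l, i.e. the residual
        # branch is downscaled by ~1/sqrt(l) relative to the skip connection.
        print(f"block {l:3d}: skip var ~ {skip_var:5.1f}, "
              f"branch/skip variance ratio ~ {branch_var / skip_var:.3f}")

A normalization-free variant in the spirit of the initialization scheme mentioned in the abstract would replace batch_norm(residual_branch(x)) with alpha * residual_branch(x), where alpha is a per-block scalar initialized at or near zero, reproducing the same bias towards the skip path at initialization without normalization layers (the scalar name alpha is an assumption here).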
