Article

The general inefficiency of batch training for gradient descent learning

Neural Networks, 16(10): 1429–1451 (2003)
DOI: http://dx.doi.org/10.1016/S0893-6080(03)00138-2

Abstract

Gradient descent training of neural networks can be done in either a batch or on-line manner. A widely held myth in the neural network community is that batch training is as fast or faster, and/or more ‘correct’, than on-line training because it supposedly uses a better approximation of the true gradient for its weight updates. This paper explains why batch training is almost always slower than on-line training—often orders of magnitude slower—especially on large training sets. The main reason is that on-line training can follow curves in the error surface throughout each epoch, which allows it to safely use a larger learning rate and thus converge with fewer iterations through the training data. Empirical results on a large (20,000-instance) speech recognition task and on 26 other learning tasks demonstrate that convergence can be reached significantly faster using on-line training than batch training, with no apparent difference in accuracy.
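The batch-versus-on-line contrast in the abstract can be illustrated with a minimal sketch (not from the paper): fitting a one-parameter linear model y = 2x by squared-error gradient descent, where batch training applies one accumulated gradient update per epoch while on-line training updates after every example. The toy data, learning rate, and tolerance below are assumptions chosen for illustration.

```python
# Toy comparison (not from the paper): batch vs. on-line gradient descent
# on the one-parameter model y_hat = w * x, squared error, target w = 2.

def batch_epoch(w, data, lr):
    # Accumulate the mean gradient over all examples, then apply ONE update.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def online_epoch(w, data, lr):
    # Update after EVERY example, following the error surface within the epoch.
    for x, y in data:
        w -= lr * 2 * (w * x - y) * x
    return w

def epochs_to_converge(step, lr, tol=1e-6, max_epochs=10_000):
    data = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
    w = 0.0
    for epoch in range(1, max_epochs + 1):
        w = step(w, data, lr)
        if abs(w - 2.0) < tol:
            return epoch
    return max_epochs

print("batch epochs:  ", epochs_to_converge(batch_epoch, lr=0.1))
print("on-line epochs:", epochs_to_converge(online_epoch, lr=0.1))
```

At the same learning rate, the per-example updates reach the tolerance in noticeably fewer passes through the data, consistent with the abstract's claim; the gap grows further when on-line training exploits its ability to use a larger learning rate safely.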
