Parallel training of DNNs with Natural Gradient and Parameter Averaging

Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur.
arXiv:1410.7455 (2014). Accepted as a workshop contribution at the International Conference on Learning Representations (ICLR) 2015, Workshop track; 12 pages plus 16 pages of appendices. Two sets of minor fixes post-publication.

Abstract

We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multi-core machines. In order to be as hardware-agnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.
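The periodic parameter-averaging scheme described above can be sketched as follows. This is a minimal illustrative toy, not the Kaldi implementation: the quadratic objective, synthetic data shards, worker count, and averaging period are all assumptions chosen to keep the example self-contained; the paper's actual contribution (NG-SGD preconditioning of the gradients) is omitted here.

```python
# Toy sketch of periodic parameter averaging: several workers run SGD on
# disjoint data shards, and after each period their parameters are averaged
# and the average is redistributed to all workers for further training.
import numpy as np

def sgd_pass(w, data, lr=0.1):
    # One pass of plain SGD on the least-squares objective 0.5 * (w - x)^2.
    for x in data:
        grad = w - x            # gradient of 0.5 * (w - x)^2 w.r.t. w
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
# Each "machine" sees different data (here: four shards drawn around mean 3.0).
shards = [rng.normal(loc=3.0, size=50) for _ in range(4)]

w_global = 0.0
for _ in range(20):             # one iteration = one averaging round
    # Workers train independently from the same redistributed starting point.
    locals_ = [sgd_pass(w_global, shard) for shard in shards]
    # Average the parameters and redistribute for the next round.
    w_global = float(np.mean(locals_))

print(w_global)                 # approaches the overall data mean (~3.0)
```

In the real framework each worker holds a full DNN parameter vector rather than a scalar, and the averaging happens every minute or two of wall-clock training; the abstract notes that plain SGD combined with this averaging works poorly on its own, and that the NG-SGD preconditioner is what makes the averaged updates combine well.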
