Abstract
We describe the neural-network training framework used in the Kaldi speech
recognition toolkit, which is geared towards training DNNs with large amounts
of training data using multiple GPU-equipped or multi-core machines. In order
to be as hardware-agnostic as possible, we needed a way to use multiple
machines without generating excessive network traffic. Our method is to periodically average
the neural-network parameters (typically every minute or two) and
redistribute the averaged parameters to the machines for further training, with each
machine seeing different data. By itself, this method does not work very well.
However, we have another method, an approximate and efficient implementation of
Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow
our periodic-averaging method to work well, and which also substantially improves
the convergence of SGD on a single machine.
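
Below is a minimal sketch, in plain Python/NumPy, of the periodic parameter-averaging idea described above: several workers run SGD on disjoint data shards, their parameters are averaged at the end of each outer iteration, and the average is redistributed as the starting point for the next iteration. This is not Kaldi code; the toy linear model, the function names, and the learning-rate settings are hypothetical illustrations, and NG-SGD is not included.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression problem: learn w such that y = X @ w.
    true_w = rng.normal(size=4)
    X = rng.normal(size=(4000, 4))
    y = X @ true_w

    def sgd_epoch(w, X_shard, y_shard, lr=0.05, batch=32):
        """Plain minibatch SGD over one data shard; returns updated parameters."""
        w = w.copy()
        for start in range(0, len(X_shard), batch):
            xb = X_shard[start:start + batch]
            yb = y_shard[start:start + batch]
            grad = 2.0 * xb.T @ (xb @ w - yb) / len(xb)
            w -= lr * grad
        return w

    num_workers = 4
    w_global = np.zeros(4)
    shards_X = np.array_split(X, num_workers)   # each worker sees different data
    shards_y = np.array_split(y, num_workers)

    for outer_iter in range(10):
        # Each worker starts from the current averaged parameters.
        worker_params = [sgd_epoch(w_global, sx, sy)
                         for sx, sy in zip(shards_X, shards_y)]
        # Average the workers' parameters and redistribute for further training.
        w_global = np.mean(worker_params, axis=0)
        loss = np.mean((X @ w_global - y) ** 2)
        print(f"outer iteration {outer_iter}: loss {loss:.6f}")

Because the workers only exchange parameters once per outer iteration rather than per minibatch, network traffic stays low regardless of how many machines are used, which is the hardware-agnostic property the abstract refers to.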