Abstract
Although deep learning has produced dazzling successes for applications of
image, speech, and video processing in the past few years, most training is done
with suboptimal hyper-parameters, requiring unnecessarily long training times.
Setting the hyper-parameters remains a black art that requires years of
experience to acquire. This report proposes several efficient ways to set the
hyper-parameters that significantly reduce training time and improve
performance. Specifically, this report shows how to examine the
training/validation/test loss function for subtle clues of underfitting and
overfitting
and suggests guidelines for moving toward the optimal balance point. Then it
discusses how to increase/decrease the learning rate/momentum to speed up
training. Our experiments show that it is crucial to balance every manner of
regularization for each dataset and architecture. Weight decay is used as a
sample regularizer to show how its optimal value is tightly coupled with the
learning rate and momentum. Files to help replicate the results reported here
are available.
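
The increase/decrease of learning rate and momentum mentioned above can be sketched as a simple cyclical schedule in which momentum moves inversely to the learning rate. This is only an illustrative sketch: the function name `one_cycle` and the specific bounds (0.001–0.1 for the learning rate, 0.85–0.95 for momentum) are assumptions for demonstration, not values prescribed by the report.

```python
def one_cycle(step, total_steps, lr_min=0.001, lr_max=0.1,
              mom_min=0.85, mom_max=0.95):
    """Illustrative cyclical schedule (assumed values, not the report's).

    The learning rate ramps linearly from lr_min up to lr_max over the
    first half of training, then back down; momentum moves in the
    opposite direction, so large learning rates pair with small momentum.
    """
    half = total_steps / 2.0
    # Fraction of the way toward the peak: 0 at the ends, 1 at the midpoint.
    frac = step / half if step <= half else (total_steps - step) / half
    lr = lr_min + frac * (lr_max - lr_min)
    momentum = mom_max - frac * (mom_max - mom_min)
    return lr, momentum
```

For example, `one_cycle(0, 100)` returns the minimum learning rate with maximum momentum, while `one_cycle(50, 100)` returns the peak learning rate with minimum momentum.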