
On the Convergence of Adam and Adagrad

Alexandre Défossez, Léon Bottou, Francis Bach, and Nicolas Usunier. (2020). arXiv:2003.02395. Comment: 19 pages, 0 figures, preprint version.

Abstract

We provide a simple proof of the convergence of the optimization algorithms Adam and Adagrad under the assumptions of smooth gradients and an almost sure uniform bound on the $\ell_\infty$ norm of the gradients. This work builds on the techniques introduced by Ward et al. (2019) and extends them to the Adam optimizer. We show that in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper bound which is explicit in the constants of the problem, the parameters of the optimizer, and the total number of iterations $N$. This bound can be made arbitrarily small. In particular, Adam with a learning rate $\alpha = 1/\sqrt{N}$ and a momentum parameter on squared gradients $\beta_2 = 1 - 1/N$ achieves the same rate of convergence $O(\ln(N)/\sqrt{N})$ as Adagrad. Thus, it is possible to use Adam as a finite-horizon version of Adagrad, much like constant step size SGD can be used instead of its asymptotically converging decaying step size version.
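The last point of the abstract suggests a concrete recipe for a fixed iteration budget $N$: scale the Adam learning rate with the horizon and set $\beta_2 = 1 - 1/N$. The sketch below, assuming PyTorch, illustrates this parameterization; the model, synthetic data, and $\beta_1$ value are placeholder assumptions and not prescribed by the paper.

```python
# Minimal sketch: using Adam as a finite-horizon version of Adagrad.
# For a fixed budget of N iterations, the abstract's scaling is
# alpha ~ 1/sqrt(N) and beta2 = 1 - 1/N. Everything else here
# (model, data, beta1) is a placeholder assumption for illustration.
import math
import torch

N = 10_000                              # total number of iterations (finite horizon)
model = torch.nn.Linear(20, 1)          # placeholder model
loss_fn = torch.nn.MSELoss()

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1.0 / math.sqrt(N),              # alpha = 1/sqrt(N)
    betas=(0.9, 1.0 - 1.0 / N),         # beta1 is a common default; beta2 = 1 - 1/N
)

for step in range(N):
    x = torch.randn(32, 20)             # synthetic batch standing in for real data
    y = torch.randn(32, 1)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

With these settings the per-step update stays close to Adagrad's over the chosen horizon, which is why a constant learning rate suffices, much as constant step size SGD stands in for its decaying-step-size counterpart over a fixed budget.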

Description

[2003.02395] On the Convergence of Adam and Adagrad

