The grokking phenomenon, as reported by Power et al. (arXiv:2201.02177),
refers to a regime where a long period of overfitting is followed by a
seemingly sudden transition to perfect generalization. In this paper, we
attempt to reveal the underpinnings of grokking via a series of empirical
studies. Specifically, we uncover an optimization anomaly plaguing adaptive
optimizers at extremely late stages of training, which we refer to as the
Slingshot Mechanism. A prominent artifact of the Slingshot Mechanism is
cyclic phase transitions between stable and unstable training regimes, which
can be easily monitored through the cyclic behavior of the norm of the last
layer's weights. We empirically observe that, without explicit regularization,
grokking as reported in arXiv:2201.02177 almost exclusively happens at the
onset of Slingshots and is absent without them. While common and easily
reproduced in more general settings, the Slingshot Mechanism does not follow
from any optimization theory that we are aware of, and can be easily
overlooked without an in-depth examination. Our work points to a surprising
and useful inductive bias of adaptive gradient optimizers at late stages of
training, calling for a revised theoretical analysis of their origin.
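The abstract notes that slingshots can be monitored through cyclic behavior of the last layer's weight norm. The sketch below is a minimal, framework-agnostic illustration (pure NumPy) of that kind of monitoring: `weight_norm_series` and `detect_slingshot_spikes` are hypothetical helper names, and the relative-jump threshold is an assumed heuristic, not the paper's actual detection criterion.

```python
import numpy as np

def weight_norm_series(weights_over_time):
    """L2 norm of the (flattened) last-layer weights at each logged step."""
    return np.array([np.linalg.norm(w) for w in weights_over_time])

def detect_slingshot_spikes(norms, rel_jump=0.5):
    """Flag steps where the norm grows by more than `rel_jump` (relative)
    over the previous step -- a crude proxy for a transition from a stable
    to an unstable training regime."""
    norms = np.asarray(norms, dtype=float)
    rel_change = np.diff(norms) / norms[:-1]
    return np.flatnonzero(rel_change > rel_jump) + 1

# Synthetic norm trace: nearly flat, then a sudden spike, then flat again.
trace = [1.0, 1.01, 1.02, 2.5, 2.4, 2.45]
print(detect_slingshot_spikes(trace).tolist())  # -> [3]
```

In practice one would log the norm of the final linear layer's weight matrix at each optimizer step and look for the repeated spike-and-relax pattern the paper describes, rather than a single jump.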
Description
The Slingshot Mechanism: An Empirical Study of Adaptive Optimizers and the Grokking Phenomenon
@misc{thilak2022slingshot,
abstract = {The grokking phenomenon, as reported by Power et al. (arXiv:2201.02177),
refers to a regime where a long period of overfitting is followed by a
seemingly sudden transition to perfect generalization. In this paper, we
attempt to reveal the underpinnings of grokking via a series of empirical
studies. Specifically, we uncover an optimization anomaly plaguing adaptive
optimizers at extremely late stages of training, which we refer to as the
Slingshot Mechanism. A prominent artifact of the Slingshot Mechanism is
cyclic phase transitions between stable and unstable training regimes, which
can be easily monitored through the cyclic behavior of the norm of the last
layer's weights. We empirically observe that, without explicit regularization,
grokking as reported in arXiv:2201.02177 almost exclusively happens at the
onset of Slingshots and is absent without them. While common and easily
reproduced in more general settings, the Slingshot Mechanism does not follow
from any optimization theory that we are aware of, and can be easily
overlooked without an in-depth examination. Our work points to a surprising
and useful inductive bias of adaptive gradient optimizers at late stages of
training, calling for a revised theoretical analysis of their origin.},
added-at = {2022-07-11T10:15:30.000+0200},
author = {Thilak, Vimal and Littwin, Etai and Zhai, Shuangfei and Saremi, Omid and Paiss, Roni and Susskind, Joshua},
biburl = {https://www.bibsonomy.org/bibtex/2f1a4a6a0c68438f1ae4b4aadfdb585b0/jpbarrettel},
description = {The Slingshot Mechanism: An Empirical Study of Adaptive Optimizers and the Grokking Phenomenon},
interhash = {d2e94fb9b15ad338bdf943a5bcb22c2f},
intrahash = {f1a4a6a0c68438f1ae4b4aadfdb585b0},
keywords = {clustering machine-learning},
note = {cite arxiv:2206.04817. Comment: Removed TeX formatting commands in Title and Abstract},
timestamp = {2022-07-11T10:15:30.000+0200},
title = {The Slingshot Mechanism: An Empirical Study of Adaptive Optimizers and the Grokking Phenomenon},
url = {http://arxiv.org/abs/2206.04817},
year = 2022
}