Linear Mode Connectivity and the Lottery Ticket Hypothesis
J. Frankle, G. Dziugaite, D. Roy, and M. Carbin. (2019). arXiv:1912.05671. Comment: This submission subsumes 1903.01611 ("Stabilizing the Lottery Ticket Hypothesis" and "The Lottery Ticket Hypothesis at Scale").
Abstract
We introduce "instability analysis," a framework for assessing whether the
outcome of optimizing a neural network is robust to SGD noise. It entails
training two copies of a network on different random data orders. If error does
not increase along the linear path between the trained parameters, we say the
network is "stable." Instability analysis reveals new properties of neural
networks. For example, standard vision models are initially unstable but become
stable early in training; from then on, the outcome of optimization is
determined up to linear interpolation. We leverage instability analysis to
examine iterative magnitude pruning (IMP), the procedure underlying the lottery
ticket hypothesis. On small vision tasks, IMP finds sparse "matching
subnetworks" that can train in isolation from initialization to full accuracy,
but it fails to do so in more challenging settings. We find that IMP
subnetworks are matching only when they are stable. In cases where IMP
subnetworks are unstable at initialization, they become stable and matching
early in training. We augment IMP to rewind subnetworks to their weights early
in training, producing sparse subnetworks of large-scale networks, including
ResNet-50 for ImageNet, that train to full accuracy.
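The instability analysis described above reduces to a simple procedure: train two copies of a network from the same starting point under different data orders, then evaluate error along the straight line between the two solutions. A minimal sketch on a toy linear-regression model (illustrative only; the model, data, and hyperparameters here are invented for the example, and a convex problem is trivially stable, unlike the deep networks the paper studies):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (not from the paper): y = X @ w_true + noise.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

def sgd(seed, w0, lr=0.05, epochs=50):
    """Train a linear model with SGD; `seed` controls the data order (the 'SGD noise')."""
    w = w0.copy()
    order_rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in order_rng.permutation(len(X)):
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

def loss(w):
    return float(np.mean((X @ w - y) ** 2))

w0 = rng.normal(size=5)      # shared initialization
w1 = sgd(seed=1, w0=w0)      # copy 1: one random data order
w2 = sgd(seed=2, w0=w0)      # copy 2: a different random data order

# Evaluate error along the linear path between the two trained solutions.
path = [loss((1 - a) * w1 + a * w2) for a in np.linspace(0, 1, 21)]
barrier = max(path) - max(loss(w1), loss(w2))
print(f"error barrier along linear path: {barrier:.6f}")
# The network is 'stable' (linearly mode connected) when the barrier is ~0.
```

For deep networks the loss is non-convex, so the barrier can be large early in training and shrink as the network stabilizes, which is the phenomenon the paper measures.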
Description
[1912.05671] Linear Mode Connectivity and the Lottery Ticket Hypothesis
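The iterative magnitude pruning procedure the abstract examines can likewise be sketched on a toy model. A hedged illustration, assuming a linear model and full-batch gradient descent (the paper prunes deep networks and rewinds to weights from early in training; here, for simplicity, rewinding goes back to the initialization):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse regression (not from the paper): only 5 of 20 features matter.
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:5] = rng.normal(size=5)
y = X @ w_true + 0.05 * rng.normal(size=200)

def train(w, mask, steps=2000, lr=0.01):
    """Gradient descent on MSE, keeping pruned weights frozen at zero."""
    w = w * mask
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)
        w -= lr * grad * mask
    return w

w_init = 0.1 * rng.normal(size=20)   # the weights we rewind to
mask = np.ones(20)

# IMP with rewinding: train, prune the smallest-magnitude surviving
# weights, rewind the rest to w_init, and repeat.
for _ in range(2):
    w_trained = train(w_init, mask)
    alive = np.flatnonzero(mask)
    k = max(1, len(alive) // 2)      # prune 50% of survivors per round
    prune = alive[np.argsort(np.abs(w_trained[alive]))[:k]]
    mask[prune] = 0.0

w_sparse = train(w_init, mask)       # final retraining of the sparse subnetwork
print("surviving weights:", int(mask.sum()))
print("final MSE:", float(np.mean((X @ w_sparse - y) ** 2)))
```

The subnetwork is "matching" in the paper's sense when this sparse retraining reaches the accuracy of the dense network; the paper's contribution is showing that rewinding to an early-training checkpoint, rather than to initialization, makes this work at scale.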
@article{frankle2019linear,
abstract = {We introduce "instability analysis," a framework for assessing whether the
outcome of optimizing a neural network is robust to SGD noise. It entails
training two copies of a network on different random data orders. If error does
not increase along the linear path between the trained parameters, we say the
network is "stable." Instability analysis reveals new properties of neural
networks. For example, standard vision models are initially unstable but become
stable early in training; from then on, the outcome of optimization is
determined up to linear interpolation. We leverage instability analysis to
examine iterative magnitude pruning (IMP), the procedure underlying the lottery
ticket hypothesis. On small vision tasks, IMP finds sparse "matching
subnetworks" that can train in isolation from initialization to full accuracy,
but it fails to do so in more challenging settings. We find that IMP
subnetworks are matching only when they are stable. In cases where IMP
subnetworks are unstable at initialization, they become stable and matching
early in training. We augment IMP to rewind subnetworks to their weights early
in training, producing sparse subnetworks of large-scale networks, including
ResNet-50 for ImageNet, that train to full accuracy.},
added-at = {2020-02-17T03:10:49.000+0100},
author = {Frankle, Jonathan and Dziugaite, Gintare Karolina and Roy, Daniel M. and Carbin, Michael},
biburl = {https://www.bibsonomy.org/bibtex/226767393214dd2b3118971d1e87d9977/kirk86},
description = {[1912.05671] Linear Mode Connectivity and the Lottery Ticket Hypothesis},
interhash = {259277893977ee5b2f98b85d7561cb78},
intrahash = {26767393214dd2b3118971d1e87d9977},
keywords = {compression generalization readings},
  note = {arXiv:1912.05671. Comment: This submission subsumes 1903.01611 ("Stabilizing the Lottery Ticket Hypothesis" and "The Lottery Ticket Hypothesis at Scale")},
timestamp = {2020-02-17T03:10:49.000+0100},
title = {Linear Mode Connectivity and the Lottery Ticket Hypothesis},
url = {http://arxiv.org/abs/1912.05671},
year = 2019
}