Abstract
We introduce "instability analysis," a framework for assessing whether the
outcome of optimizing a neural network is robust to SGD noise. It entails
training two copies of a network on different random data orders. If error does
not increase along the linear path between the trained parameters, we say the
network is "stable." Instability analysis reveals new properties of neural
networks. For example, standard vision models are initially unstable but become
stable early in training; from then on, the outcome of optimization is
determined up to linear interpolation. We leverage instability analysis to
examine iterative magnitude pruning (IMP), the procedure underlying the lottery
ticket hypothesis. On small vision tasks, IMP finds sparse "matching
subnetworks" that can train in isolation from initialization to full accuracy,
but it fails to do so in more challenging settings. We find that IMP
subnetworks are matching only when they are stable. In cases where IMP
subnetworks are unstable at initialization, they become stable and matching
early in training. We augment IMP to rewind subnetworks to their weights early
in training, producing sparse subnetworks of large-scale networks, including
Resnet-50 for ImageNet, that train to full accuracy.
This submission subsumes 1903.01611 ("Stabilizing the Lottery Ticket
Hypothesis" and "The Lottery Ticket Hypothesis at Scale").
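The core measurement behind instability analysis can be sketched concisely: evaluate the loss along the linear path between two trained parameter vectors and report the barrier, i.e., the largest increase over the linear interpolation of the endpoint losses. The sketch below is illustrative, not the paper's implementation; `error_barrier` and the toy double-well loss are assumptions for demonstration.

```python
import numpy as np

def error_barrier(loss_fn, theta_a, theta_b, steps=11):
    """Evaluate loss along the linear path between two solutions.

    Returns the barrier: the largest increase of the interpolated loss
    over the straight-line interpolation of the two endpoint losses.
    A barrier near zero means the two solutions are "stable" in the
    paper's sense (linearly mode-connected).
    """
    alphas = np.linspace(0.0, 1.0, steps)
    losses = np.array([loss_fn((1 - a) * theta_a + a * theta_b)
                       for a in alphas])
    baseline = (1 - alphas) * losses[0] + alphas * losses[-1]
    return float(np.max(losses - baseline))

# Toy demo with a 1-D double-well loss: minima in different basins show a
# positive barrier ("unstable"), points in the same basin show none.
loss = lambda t: float(np.sum((t**2 - 1.0) ** 2))
barrier_across = error_barrier(loss, np.array([-1.0]), np.array([1.0]))
barrier_within = error_barrier(loss, np.array([0.9]), np.array([1.1]))
```

In practice `theta_a` and `theta_b` would be the flattened weights of two copies of a network trained from the same (or rewound) starting point on different SGD data orders, and `loss_fn` would evaluate test error rather than a toy objective.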