Abstract
As its width tends to infinity, a deep neural network's behavior under
gradient descent can become simplified and predictable (e.g. given by the
Neural Tangent Kernel (NTK)), if it is parametrized appropriately (e.g. the NTK
parametrization). However, we show that the standard and NTK parametrizations
of a neural network do not admit infinite-width limits that can learn features,
which is crucial for pretraining and transfer learning such as with BERT. We
propose simple modifications to the standard parametrization to allow for
feature learning in the limit. Using the *Tensor Programs* technique, we derive
explicit formulas for such limits. On Word2Vec and few-shot learning on
Omniglot via MAML, two canonical tasks that rely crucially on feature learning,
we compute these limits exactly. We find that they outperform both NTK
baselines and finite-width networks, with the latter approaching the
infinite-width feature learning performance as width increases.
More generally, we classify a natural space of neural network
parametrizations that generalizes standard, NTK, and Mean Field
parametrizations. We show 1) any parametrization in this space either admits
feature learning or has an infinite-width training dynamics given by kernel
gradient descent, but not both; 2) any such infinite-width limit can be
computed using the Tensor Programs technique. Code for our experiments can be
found at github.com/edwardjhu/TP4.
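
To make the abstract's contrast between parametrizations more concrete, here is a minimal, assumption-laden sketch of how width-dependent scalings can look for a one-hidden-layer network. The "standard" and "ntk" branches follow the commonly quoted forms of those parametrizations, and the "feature_learning" branch follows a commonly cited rescaling of the output layer in the spirit of the paper's modification; none of it is taken verbatim from the paper or the TP4 repository, and all function names are hypothetical.

```python
# Illustrative sketch (not the paper's exact formulation): three ways to
# parametrize a one-hidden-layer network f(x) = V . tanh(U x) as a function
# of the width n, differing in weight init scale, explicit multipliers, and
# the learning-rate scale needed for a stable infinite-width limit.
import numpy as np

def init_network(n, d_in=10, parametrization="standard", seed=0):
    """Return (forward, lr_scale) for a width-n one-hidden-layer net."""
    rng = np.random.default_rng(seed)
    if parametrization == "standard":
        # Standard parametrization: variance 1/fan_in at init, no multipliers;
        # the learning rate must shrink like 1/n for a well-defined limit.
        U = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(n, d_in))
        V = rng.normal(0.0, 1.0 / np.sqrt(n), size=(1, n))
        forward = lambda x: V @ np.tanh(U @ x)
        lr_scale = 1.0 / n
    elif parametrization == "ntk":
        # NTK parametrization: unit-variance weights with explicit
        # 1/sqrt(fan_in) multipliers; O(1) learning rate, kernel limit.
        U = rng.normal(0.0, 1.0, size=(n, d_in))
        V = rng.normal(0.0, 1.0, size=(1, n))
        forward = lambda x: (V / np.sqrt(n)) @ np.tanh((U / np.sqrt(d_in)) @ x)
        lr_scale = 1.0
    elif parametrization == "feature_learning":
        # Feature-learning-style rescaling (illustrative assumption): shrink
        # the output multiplier to 1/n so the hidden representation can move
        # by Theta(1) during training under an O(1) learning rate.
        U = rng.normal(0.0, 1.0, size=(n, d_in))
        V = rng.normal(0.0, 1.0, size=(1, n))
        forward = lambda x: (V / n) @ np.tanh((U / np.sqrt(d_in)) @ x)
        lr_scale = 1.0
    else:
        raise ValueError(parametrization)
    return forward, lr_scale

# Usage: compare the output scale at initialization across widths.
x = np.ones(10)
for p in ("standard", "ntk", "feature_learning"):
    outs = [init_network(n, parametrization=p)[0](x)[0] for n in (128, 1024, 8192)]
    print(p, [f"{o:+.4f}" for o in outs])
```

Running the sketch shows the standard and NTK networks keeping O(1) outputs at initialization while the feature-learning-style scaling drives the initial output toward zero as width grows; the paper's classification of parametrizations makes precise which such scalings yield kernel-gradient-descent limits and which admit feature learning.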
Description
Feature Learning in Infinite-Width Neural Networks