Tensor Programs II: Neural Tangent Kernel for Any Architecture
G. Yang (2020). arXiv:2006.14548. Comment: 11 pages of main text; 60 pages total.
Abstract
We prove that a randomly initialized neural network of *any architecture* has
its Neural Tangent Kernel (NTK) converge to a deterministic limit as the
network widths tend to infinity, and we show how to calculate this limit. In
prior literature, heuristic studies of neural network gradients often assume
that every weight matrix used in forward propagation is independent of its
transpose used in backpropagation (Schoenholz et al. 2017). This is known as
the *gradient independence assumption (GIA)*. We identify a commonly satisfied
condition, which we call the *Simple GIA Check*, under which the NTK limit
calculation based on GIA is correct. Conversely, when the Simple GIA Check
fails, we show that GIA can produce wrong answers. This material presents the
NTK results of Yang (2019a) in an accessible manner and showcases the *tensor
programs* technique for understanding wide neural networks. We provide
reference implementations of the infinite-width NTKs of a recurrent neural
network, a transformer, and a batch-normalized network at
https://github.com/thegregyang/NTK4A.
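For concreteness, here is a minimal sketch of the objects the abstract names, written in standard NTK notation; the notation is ours, not taken from the paper.

```latex
% A minimal sketch in standard NTK notation (our notation, not the paper's).
% For a network f(x; \theta) with widths n, the empirical NTK at a random
% initialization \theta_0 is the Gram matrix of parameter gradients:
\[
  \Theta_n(x, x')
    = \bigl\langle \nabla_\theta f(x; \theta_0),\,
                   \nabla_\theta f(x'; \theta_0) \bigr\rangle .
\]
% The paper's claim is that for any architecture expressible as a tensor
% program, \Theta_n(x, x') converges to a deterministic kernel
% \mathring{\Theta}(x, x') as n \to \infty. The GIA is the heuristic of
% pretending the backward pass uses an independent copy \widetilde{W}^\top
% in place of W^\top when computing such limits.
```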
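The linked repository contains the paper's reference implementations. As an illustration only, the following sketch (ours, not from NTK4A) estimates the empirical NTK of a small MLP with JAX autodiff; this is the finite-width quantity whose infinite-width limit the paper computes.

```python
# A minimal sketch, not from the paper or the NTK4A repo: estimate the
# empirical NTK of a small tanh MLP at random initialization. All names
# and hyperparameters here are illustrative assumptions.
import jax
import jax.numpy as jnp

def init_params(key, widths=(3, 512, 512, 1)):
    # Standard-Gaussian weights; the 1/sqrt(fan_in) factor lives in the
    # forward pass (NTK parametrization).
    keys = jax.random.split(key, len(widths) - 1)
    return [jax.random.normal(k, (m, n))
            for k, (m, n) in zip(keys, zip(widths[:-1], widths[1:]))]

def mlp(params, x):
    h = x
    for W in params[:-1]:
        h = jnp.tanh(h @ W / jnp.sqrt(W.shape[0]))
    return (h @ params[-1] / jnp.sqrt(params[-1].shape[0]))[0]

def empirical_ntk(params, x1, x2):
    # <grad_theta f(x1), grad_theta f(x2)>, summed over all weight matrices.
    g1, g2 = jax.grad(mlp)(params, x1), jax.grad(mlp)(params, x2)
    return sum(jnp.vdot(a, b) for a, b in zip(g1, g2))

key = jax.random.PRNGKey(0)
x1, x2 = jnp.ones(3), jnp.arange(3.0)
print(empirical_ntk(init_params(key), x1, x2))
```

Rerunning with larger hidden widths and several random keys should show the printed value concentrating around a fixed number, which is the convergence the paper proves for general architectures.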
Description
[2006.14548] Tensor Programs II: Neural Tangent Kernel for Any Architecture
%0 Journal Article
%1 yang2020tensor
%A Yang, Greg
%D 2020
%K dynamic gradients ntk
%T Tensor Programs II: Neural Tangent Kernel for Any Architecture
%U http://arxiv.org/abs/2006.14548
@article{yang2020tensor,
author = {Yang, Greg},
keywords = {dynamic gradients ntk},
note = {arXiv:2006.14548. Comment: 11 pages of main text; 60 pages total},
title = {Tensor Programs II: Neural Tangent Kernel for Any Architecture},
url = {http://arxiv.org/abs/2006.14548},
year = 2020
}