An exact mapping between the Variational Renormalization Group and Deep
Learning
P. Mehta and D. Schwab (2014). arXiv:1410.3831. Comment: 8 pages, 3 figures.
Abstract
Deep learning is a broad set of techniques that uses multiple layers of
representation to automatically learn relevant features directly from
structured data. Recently, such techniques have yielded record-breaking results
on a diverse set of difficult machine learning tasks in computer vision, speech
recognition, and natural language processing. Despite the enormous success of
deep learning, relatively little is understood theoretically about why these
techniques are so successful at feature learning and compression. Here, we show
that deep learning is intimately related to one of the most important and
successful techniques in theoretical physics, the renormalization group (RG).
RG is an iterative coarse-graining scheme that allows for the extraction of
relevant features (i.e. operators) as a physical system is examined at
different length scales. We construct an exact mapping between the variational
renormalization group, first introduced by Kadanoff, and deep learning
architectures based on Restricted Boltzmann Machines (RBMs). We illustrate
these ideas using the nearest-neighbor Ising model in one and two dimensions.
Our results suggest that deep learning algorithms may be employing a
generalized RG-like scheme to learn relevant features from data.
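As a toy illustration of the iterative coarse-graining the abstract describes (a minimal sketch, not code from the paper), the 1D nearest-neighbor Ising model admits an exact real-space RG step: tracing out every other spin renormalizes the dimensionless coupling as K' = (1/2) ln cosh(2K).

```python
# Minimal sketch (not from the paper): exact real-space RG decimation
# for the 1D nearest-neighbor Ising model. Summing out alternate spins
# maps the coupling K to K' = (1/2) * ln(cosh(2K)).
import math

def decimate(K):
    """One coarse-graining step for the 1D Ising chain."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.0           # dimensionless coupling J / (k_B * T)
flow = [K]
for _ in range(6):
    K = decimate(K)
    flow.append(K)

# The coupling flows toward the trivial fixed point K = 0, consistent
# with the 1D Ising model having no finite-temperature phase transition.
print(flow)
```

Iterating the step drives K monotonically to zero, which is the 1D analogue of the flow toward "relevant" features that the paper relates to layer-by-layer learning in RBM-based architectures.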
Description
An exact mapping between the Variational Renormalization Group and Deep Learning
%0 Generic
%1 mehta2014exact
%A Mehta, Pankaj
%A Schwab, David J.
%D 2014
%K DL
%T An exact mapping between the Variational Renormalization Group and Deep
Learning
%U http://arxiv.org/abs/1410.3831
%X Deep learning is a broad set of techniques that uses multiple layers of
representation to automatically learn relevant features directly from
structured data. Recently, such techniques have yielded record-breaking results
on a diverse set of difficult machine learning tasks in computer vision, speech
recognition, and natural language processing. Despite the enormous success of
deep learning, relatively little is understood theoretically about why these
techniques are so successful at feature learning and compression. Here, we show
that deep learning is intimately related to one of the most important and
successful techniques in theoretical physics, the renormalization group (RG).
RG is an iterative coarse-graining scheme that allows for the extraction of
relevant features (i.e. operators) as a physical system is examined at
different length scales. We construct an exact mapping between the variational
renormalization group, first introduced by Kadanoff, and deep learning
architectures based on Restricted Boltzmann Machines (RBMs). We illustrate
these ideas using the nearest-neighbor Ising model in one and two dimensions.
Our results suggest that deep learning algorithms may be employing a
generalized RG-like scheme to learn relevant features from data.
@misc{mehta2014exact,
abstract = {Deep learning is a broad set of techniques that uses multiple layers of
representation to automatically learn relevant features directly from
structured data. Recently, such techniques have yielded record-breaking results
on a diverse set of difficult machine learning tasks in computer vision, speech
recognition, and natural language processing. Despite the enormous success of
deep learning, relatively little is understood theoretically about why these
techniques are so successful at feature learning and compression. Here, we show
that deep learning is intimately related to one of the most important and
successful techniques in theoretical physics, the renormalization group (RG).
RG is an iterative coarse-graining scheme that allows for the extraction of
relevant features (i.e. operators) as a physical system is examined at
different length scales. We construct an exact mapping between the variational
renormalization group, first introduced by Kadanoff, and deep learning
architectures based on Restricted Boltzmann Machines (RBMs). We illustrate
these ideas using the nearest-neighbor Ising model in one and two dimensions.
Our results suggest that deep learning algorithms may be employing a
generalized RG-like scheme to learn relevant features from data.},
added-at = {2020-01-09T10:39:11.000+0100},
author = {Mehta, Pankaj and Schwab, David J.},
biburl = {https://www.bibsonomy.org/bibtex/2e023e1d0dc1c88f945fd9307a2728b93/rpennec},
description = {An exact mapping between the Variational Renormalization Group and Deep Learning},
keywords = {DL},
note = {arXiv:1410.3831. Comment: 8 pages, 3 figures},
timestamp = {2020-01-09T10:39:11.000+0100},
title = {An exact mapping between the Variational Renormalization Group and Deep
Learning},
url = {http://arxiv.org/abs/1410.3831},
year = 2014
}