On How Well Generative Adversarial Networks Learn Densities:
Nonparametric and Parametric Results
T. Liang (2018). arXiv:1811.03179. Comment: 36 pages, 3 figures.
Abstract
We study in this paper the rate of convergence for learning distributions
with the adversarial framework and Generative Adversarial Networks (GANs),
which subsumes Wasserstein, Sobolev and MMD GANs as special cases. We study a
wide range of parametric and nonparametric target distributions, under a
collection of objective evaluation metrics. On the nonparametric end, we
investigate the minimax optimal rates and fundamental difficulty of the density
estimation under the adversarial framework. On the parametric end, we establish
a theory for general neural network classes (including deep leaky ReLU as a
special case) that characterizes the interplay between the choice of generator and
discriminator. We investigate how to obtain a good statistical guarantee for
GANs through the lens of regularization. We discover and isolate a new notion
of regularization, called the generator/discriminator pair
regularization, that sheds light on the advantage of GANs compared to
classical parametric and nonparametric approaches for density estimation. We
develop novel oracle inequalities as the main tools for analyzing GANs, which
is of independent theoretical interest.
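For context (this formula is not spelled out in the abstract itself): the evaluation metrics unifying Wasserstein, Sobolev, and MMD GANs are integral probability metrics (IPMs), where the discriminator class determines the metric. A sketch of the common form:

```latex
% IPM between distributions mu and nu over a discriminator class F:
\[
  d_{\mathcal{F}}(\mu, \nu)
  \;=\;
  \sup_{f \in \mathcal{F}}
  \Bigl( \mathbb{E}_{X \sim \mu}\, f(X) \;-\; \mathbb{E}_{Y \sim \nu}\, f(Y) \Bigr)
\]
% F = 1-Lipschitz functions -> Wasserstein-1 distance (Wasserstein GAN)
% F = Sobolev ball           -> Sobolev GAN metric
% F = RKHS unit ball         -> maximum mean discrepancy (MMD GAN)
```

Choosing the discriminator class as 1-Lipschitz functions, a Sobolev ball, or an RKHS unit ball recovers the three special cases named in the abstract.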
Description
[1811.03179] On How Well Generative Adversarial Networks Learn Densities: Nonparametric and Parametric Results
@article{liang2018generative,
abstract = {We study in this paper the rate of convergence for learning distributions
with the adversarial framework and Generative Adversarial Networks (GANs),
which subsumes Wasserstein, Sobolev and MMD GANs as special cases. We study a
wide range of parametric and nonparametric target distributions, under a
collection of objective evaluation metrics. On the nonparametric end, we
investigate the minimax optimal rates and fundamental difficulty of the density
estimation under the adversarial framework. On the parametric end, we establish
a theory for general neural network classes (including deep leaky ReLU as a
special case) that characterizes the interplay between the choice of generator and
discriminator. We investigate how to obtain a good statistical guarantee for
GANs through the lens of regularization. We discover and isolate a new notion
of regularization, called the \textit{generator/discriminator pair
regularization}, that sheds light on the advantage of GANs compared to
classical parametric and nonparametric approaches for density estimation. We
develop novel oracle inequalities as the main tools for analyzing GANs, which
is of independent theoretical interest.},
added-at = {2020-02-16T19:57:16.000+0100},
author = {Liang, Tengyuan},
biburl = {https://www.bibsonomy.org/bibtex/246cda3067e19893b0d2cc4046cb6fdf9/kirk86},
description = {[1811.03179] On How Well Generative Adversarial Networks Learn Densities: Nonparametric and Parametric Results},
interhash = {69151d1332009f837e8aae127914d265},
intrahash = {46cda3067e19893b0d2cc4046cb6fdf9},
keywords = {adversarial generative-models readings sampling},
note = {arXiv:1811.03179; Comment: 36 pages, 3 figures},
timestamp = {2020-02-16T19:57:16.000+0100},
title = {On How Well Generative Adversarial Networks Learn Densities:
Nonparametric and Parametric Results},
url = {http://arxiv.org/abs/1811.03179},
year = 2018
}