Abstract
In this paper, we study the rate of convergence for learning distributions with the adversarial framework and Generative Adversarial Networks (GANs), which subsumes Wasserstein, Sobolev, and MMD GANs as special cases. We consider a wide range of parametric and nonparametric target distributions under a collection of objective evaluation metrics. On the nonparametric end, we investigate the minimax optimal rates and the fundamental difficulty of density estimation under the adversarial framework. On the parametric end, we establish a theory for general neural network classes (including deep leaky ReLU networks as a special case) that characterizes the interplay between the choices of generator and discriminator. We investigate how to obtain good statistical guarantees for GANs through the lens of regularization. We discover and isolate a new notion of regularization, called generator/discriminator pair regularization, which sheds light on the advantage of GANs over classical parametric and nonparametric approaches to density estimation. As the main tools for analyzing GANs, we develop novel oracle inequalities, which are of independent theoretical interest.
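For context, these GAN variants can be unified through the integral probability metric (IPM) induced by a discriminator class; the display below and the symbols $d_{\mathcal{F}}$, $\mu$, $\nu$ are our illustrative notation, not taken from the abstract itself:
\[
d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \Bigl| \mathbb{E}_{X \sim \mu}\,[f(X)] \;-\; \mathbb{E}_{Y \sim \nu}\,[f(Y)] \Bigr| .
\]
Taking $\mathcal{F}$ to be the class of 1-Lipschitz functions recovers the Wasserstein-1 distance; taking $\mathcal{F}$ to be a Sobolev ball yields the Sobolev GAN metric; taking $\mathcal{F}$ to be the unit ball of an RKHS yields the maximum mean discrepancy (MMD).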