How Well Can Generative Adversarial Networks Learn Densities: A Nonparametric View

(2017). arXiv:1712.08244. Comment: 21 pages.

Abstract

In this paper we study the rate of convergence for learning densities under the Generative Adversarial Networks (GAN) framework, borrowing insights from nonparametric statistics. We introduce an improved GAN estimator that achieves a faster rate by simultaneously leveraging the smoothness of the target density and of the evaluation metric, which in theory remedies the mode collapse problem reported in the literature. A minimax lower bound is constructed to show that, when the dimension is large, the exponent in the rate for the new GAN estimator is near optimal. One can view our results as answering quantitatively how well GANs learn a wide range of densities with different smoothness properties, under a hierarchy of evaluation metrics. As a byproduct, we also obtain improved generalization bounds for GANs with deeper ReLU discriminator networks.
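
To make the abstract concrete, here is a minimal LaTeX sketch of the standard setup in this literature; the notation is assumed, not quoted from the paper: $\mathcal{F}$ is the discriminator class, $\alpha$ its smoothness, $\beta$ the smoothness of the target density $\nu$ in dimension $d$, $n$ the sample size, and $\hat{\nu}_n$ the estimator.

% Integral probability metric (IPM) induced by a discriminator class F.
% Larger (less smooth) F gives a stronger metric, so varying alpha yields
% the "hierarchy of evaluation metrics" the abstract refers to.
\[
  d_{\mathcal{F}}(\mu, \nu)
  = \sup_{f \in \mathcal{F}}
    \left| \mathbb{E}_{X \sim \mu}\, f(X) - \mathbb{E}_{X \sim \nu}\, f(X) \right|
\]
% With F a Sobolev ball of smoothness alpha and a target density of
% smoothness beta, nonparametric rates in this setting take the following
% flavor (the exact exponent is a recollection, to be checked against the paper):
\[
  \mathbb{E}\, d_{\mathcal{F}}(\hat{\nu}_n, \nu)
  \;\lesssim\; n^{-\frac{\alpha + \beta}{2\beta + d}} \vee n^{-\frac{1}{2}}
\]

Note how the first exponent degrades as $d$ grows; this is the regime in which the abstract's large-dimension minimax comparison takes place.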


Links and resources

https://arxiv.org/abs/1712.08244
