
Towards Consistency of Adversarial Training for Generative Models

(2017). arXiv:1705.09199.

Abstract

This work presents a rigorous statistical analysis of adversarial training for generative models, advancing recent work by Arjovsky and Bottou [2]. A key element is the distinction between the objective function with respect to the (unknown) data distribution, and its empirical counterpart. This yields a straightforward explanation for common pathologies in practical adversarial training such as vanishing gradients. To overcome such issues, we pursue the idea of smoothing the Jensen-Shannon Divergence (JSD) by incorporating noise in the formulation of the discriminator. As we show, this effectively leads to an empirical version of the JSD in which the true and the generator densities are replaced by kernel density estimates. We analyze statistical consistency of this objective, and demonstrate its practical effectiveness.
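The central construction described in the abstract is an empirical JSD in which the true and generator densities are replaced by kernel density estimates of the corresponding samples. The following is a minimal sketch of that quantity for one-dimensional samples, not code from the paper: it assumes Gaussian kernels via scipy.stats.gaussian_kde and plain grid integration, and the function name empirical_jsd and its parameters are illustrative.

```python
# Hedged sketch: JSD between kernel density estimates of real and generated
# samples, approximated by numerical integration on a 1-D grid.
import numpy as np
from scipy.stats import gaussian_kde

def empirical_jsd(real_samples, gen_samples, grid_size=1000):
    """Approximate JSD between KDEs of two 1-D sample sets."""
    p_hat = gaussian_kde(real_samples)  # KDE standing in for the data density
    q_hat = gaussian_kde(gen_samples)   # KDE standing in for the generator density

    lo = min(real_samples.min(), gen_samples.min())
    hi = max(real_samples.max(), gen_samples.max())
    xs = np.linspace(lo, hi, grid_size)
    dx = xs[1] - xs[0]

    p, q = p_hat(xs), q_hat(xs)
    m = 0.5 * (p + q)
    eps = 1e-12  # guard against log(0)
    kl_pm = np.sum(p * np.log((p + eps) / (m + eps))) * dx
    kl_qm = np.sum(q * np.log((q + eps) / (m + eps))) * dx
    return 0.5 * (kl_pm + kl_qm)

# Usage example: samples from two Gaussians with shifted means
rng = np.random.default_rng(0)
print(empirical_jsd(rng.normal(0.0, 1.0, 500), rng.normal(1.0, 1.0, 500)))
```

Because both densities are smoothed by the kernel, this estimate avoids the degenerate (vanishing-gradient) behaviour that arises when the empirical supports of the two sample sets do not overlap.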
