Abstract

We introduce a novel approach to training generative adversarial networks in which the generator is trained to match a target distribution that converges to the data distribution in the limit of a perfect discriminator. This objective can be interpreted as training the generator to produce samples that lie on the decision boundary of the current discriminator at each update, and we call a GAN trained with this algorithm a boundary-seeking GAN (BS-GAN). This approach can be used to train a generator with discrete output when the generator outputs a parametric conditional distribution. We demonstrate the effectiveness of the proposed algorithm on discrete image data. In contrast to the proposed algorithm, we observe that the recently proposed Gumbel-Softmax technique for reparametrizing discrete variables does not work for training a GAN with discrete data. Finally, we note that the proposed boundary-seeking algorithm also works with continuous variables, and demonstrate its effectiveness on two widely used image data sets, SVHN and CelebA.
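The following is a minimal sketch, not the authors' reference code, of how the boundary-seeking idea for continuous data can be expressed as a generator loss: generated samples are pushed toward the discriminator's decision boundary D(x) = 0.5 by penalizing the squared gap between log D(x) and log(1 - D(x)). The names `G`, `D`, `boundary_seeking_generator_loss`, and the PyTorch setting are assumptions for illustration.

```python
# Hypothetical sketch of a boundary-seeking generator loss (continuous case),
# assuming a PyTorch discriminator D whose output is a probability in (0, 1).
import torch

def boundary_seeking_generator_loss(d_fake_probs, eps=1e-7):
    """Penalize the squared difference between log D(x) and log(1 - D(x)),
    which is zero exactly when generated samples sit on the decision
    boundary D(x) = 0.5."""
    d = d_fake_probs.clamp(eps, 1.0 - eps)
    return 0.5 * ((torch.log(d) - torch.log(1.0 - d)) ** 2).mean()

# Illustrative use inside a standard GAN training loop (G, D, latent_dim,
# batch_size, and g_optimizer are placeholders):
# z = torch.randn(batch_size, latent_dim)
# fake = G(z)
# g_loss = boundary_seeking_generator_loss(D(fake))
# g_loss.backward()
# g_optimizer.step()
```

For discrete outputs, where such a loss cannot be backpropagated through sampling, the paper instead trains the generator's parametric conditional distribution toward a discriminator-derived target distribution; the sketch above covers only the continuous case.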

Description

[1702.08431] Boundary-Seeking Generative Adversarial Networks
