Abstract
The manifold assumption in machine learning states that data lie approximately
on a manifold of much lower dimension than the input space. Generative models
learn to produce data according to the underlying data distribution and are
used in various tasks, such as data augmentation and generating variations of
images. This paper addresses the following question: do generative models need
to be aware of the topology of the underlying data manifold on which the data
lie? We suggest that the answer is yes and demonstrate that ignoring this
topology can have ramifications for security-critical applications, such as
generative-model-based defenses against adversarial examples. We provide
theoretical and experimental results to support our claims.