Abstract

Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep autoencoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable basic shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation. We perform a thorough study of different generative models, including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian mixture models (GMMs). For our quantitative evaluation we propose measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our careful evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs produce the best results.
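The matching-based fidelity and diversity measures mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the Chamfer distance as the underlying point-cloud distance, and the helper names `chamfer_distance` and `mmd_and_coverage` are hypothetical.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3)."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1) ** 2
    # Average nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mmd_and_coverage(samples, references):
    """Match generated samples to reference clouds (both lists of (N,3) arrays).

    Returns (mmd, coverage): a fidelity score (lower is better) and a
    diversity score in [0, 1] (higher is better).
    """
    # dists[i, j] = distance from sample i to reference j.
    dists = np.array([[chamfer_distance(s, r) for r in references]
                      for s in samples])
    # Fidelity: each reference is matched to its closest sample.
    mmd = dists.min(axis=0).mean()
    # Diversity: fraction of references that are the nearest neighbor
    # of at least one sample.
    coverage = len(set(dists.argmin(axis=1))) / len(references)
    return mmd, coverage
```

For example, a sample set whose clouds cluster around every reference cloud yields coverage close to 1, while a mode-collapsed generator that reproduces only one reference yields low coverage even if its MMD is small.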

Description

1707.02392.pdf
