
Decomposing 3D Scenes into Objects via Unsupervised Volume Segmentation

Karl Stelzner, Kristian Kersting, and Adam R. Kosiorek. (2021). arXiv:2104.01148. Comment: 15 pages, 3 figures. For the project page with videos, see http://stelzner.github.io/obsurf/.

Abstract

We present ObSuRF, a method which turns a single image of a scene into a 3D model represented as a set of Neural Radiance Fields (NeRFs), with each NeRF corresponding to a different object. A single forward pass of an encoder network outputs a set of latent vectors describing the objects in the scene. These vectors are used independently to condition a NeRF decoder, defining the geometry and appearance of each object. We make learning more computationally efficient by deriving a novel loss, which allows training NeRFs on RGB-D inputs without explicit ray marching. After confirming that the model performs on par with or better than the state of the art on three 2D image segmentation benchmarks, we apply it to two multi-object 3D datasets: a multiview version of CLEVR, and a novel dataset in which scenes are populated by ShapeNet models. We find that after training ObSuRF on RGB-D views of training scenes, it is capable of not only recovering the 3D geometry of a scene depicted in a single input image, but also of segmenting it into objects, despite receiving no supervision in that regard.
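
To make the pipeline described in the abstract concrete, the sketch below shows how a single encoder pass could yield one latent vector per object, with each latent conditioning a shared NeRF-style decoder whose per-object densities and colors are then mixed into a single scene. This is a minimal illustration under assumed choices (PyTorch, module names, layer sizes, slot count, and a density-weighted compositing rule); it is not the authors' implementation and does not include their RGB-D loss.

```python
# Minimal sketch (not the authors' code): a set-latent encoder conditioning
# per-object NeRF decoders, loosely following the pipeline the abstract describes.
# All names, sizes, and the compositing rule are illustrative assumptions.
import torch
import torch.nn as nn

class SlotEncoder(nn.Module):
    """Maps an input image to a set of K latent vectors, one per object."""
    def __init__(self, num_slots=4, latent_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_slots = nn.Linear(64, num_slots * latent_dim)
        self.num_slots, self.latent_dim = num_slots, latent_dim

    def forward(self, image):                        # image: (B, 3, H, W)
        feats = self.backbone(image)                 # (B, 64)
        return self.to_slots(feats).view(-1, self.num_slots, self.latent_dim)

class ConditionalNeRF(nn.Module):
    """Decodes a 3D point, conditioned on one object latent, into (density, rgb)."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                    # 1 density + 3 color channels
        )

    def forward(self, points, z):                    # points: (B, N, 3), z: (B, D)
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        out = self.mlp(torch.cat([points, z], dim=-1))
        density = torch.nn.functional.softplus(out[..., :1])
        rgb = torch.sigmoid(out[..., 1:])
        return density, rgb

def compose_scene(points, slots, decoder):
    """Evaluate each object's NeRF at the query points and mix them:
    densities are summed, colors are weighted by relative density."""
    densities, colors = zip(*(decoder(points, slots[:, k]) for k in range(slots.shape[1])))
    densities = torch.stack(densities, dim=1)        # (B, K, N, 1)
    colors = torch.stack(colors, dim=1)              # (B, K, N, 3)
    total_density = densities.sum(dim=1)             # (B, N, 1)
    weights = densities / (total_density.unsqueeze(1) + 1e-8)
    return total_density, (weights * colors).sum(dim=1)

# Usage: one forward pass yields K latents; each conditions the shared decoder.
encoder, decoder = SlotEncoder(), ConditionalNeRF()
slots = encoder(torch.rand(2, 3, 64, 64))            # (2, 4, 64)
sigma, rgb = compose_scene(torch.rand(2, 1024, 3), slots, decoder)
```

The compositing rule used here (summed densities, density-weighted colors) is a standard choice for mixing per-object radiance fields; the paper's contribution of training on RGB-D inputs without explicit ray marching is not reproduced in this sketch.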

