Decomposing 3D Scenes into Objects via Unsupervised Volume Segmentation
K. Stelzner, K. Kersting, and A. Kosiorek (2021). cite arxiv:2104.01148. Comment: 15 pages, 3 figures. For project page with videos, see http://stelzner.github.io/obsurf/.
Abstract
We present ObSuRF, a method which turns a single image of a scene into a 3D
model represented as a set of Neural Radiance Fields (NeRFs), with each NeRF
corresponding to a different object. A single forward pass of an encoder
network outputs a set of latent vectors describing the objects in the scene.
These vectors are used independently to condition a NeRF decoder, defining the
geometry and appearance of each object. We make learning more computationally
efficient by deriving a novel loss, which allows training NeRFs on RGB-D inputs
without explicit ray marching. After confirming that the model performs on par
with or better than the state of the art on three 2D image segmentation
benchmarks, we apply it to two multi-object 3D datasets: a multiview version of
CLEVR, and a novel dataset in which scenes are populated by ShapeNet models. We
find that after training ObSuRF on RGB-D views of training scenes, it is
capable of not only recovering the 3D geometry of a scene depicted in a single
input image, but also of segmenting it into objects, despite receiving no
supervision in that regard.
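The conditioning scheme the abstract describes (one encoder pass yields a set of latent vectors, each of which independently conditions a shared NeRF decoder) can be sketched as follows. This is a minimal NumPy illustration, not the paper's architecture: the encoder and decoder here are hypothetical linear stand-ins, and all names and shapes are assumptions made for the sketch.

```python
import numpy as np

# Hypothetical sizes, chosen only for illustration.
K, D = 4, 16  # number of object slots, latent dimension
rng = np.random.default_rng(0)
W = rng.standard_normal((3, D))  # shared decoder weights, fixed across objects

def encoder(image):
    """Stand-in encoder: a single forward pass outputs K latent vectors,
    one per object in the scene (random here, learned in the real model)."""
    return rng.standard_normal((K, D))

def nerf_decoder(z, xyz):
    """Stand-in conditioned NeRF: latent z plus 3D points -> (density, rgb)."""
    h = np.tanh(xyz @ W + z)                    # condition shared decoder on z
    density = np.exp(h.sum(-1, keepdims=True))  # sigma >= 0
    rgb = 1.0 / (1.0 + np.exp(-h[..., :3]))     # colors in [0, 1]
    return density, rgb

image = np.zeros((64, 64, 3))               # placeholder input image
slots = encoder(image)                      # (K, D): one latent per object
points = rng.standard_normal((128, 3))      # query points in the scene

# Each latent independently conditions the same decoder; per-point densities
# then act as unnormalized weights for segmenting space into objects.
densities = np.stack([nerf_decoder(z, points)[0] for z in slots])  # (K, 128, 1)
assignment = densities[..., 0].argmax(0)    # which object claims each point
```

Because the decoder weights are shared and only the latent varies, each object's geometry and appearance is fully determined by its slot vector, which is what lets the decomposition emerge without segmentation supervision.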
Description
[2104.01148] Decomposing 3D Scenes into Objects via Unsupervised Volume Segmentation
%0 Generic
%1 stelzner2021decomposing
%A Stelzner, Karl
%A Kersting, Kristian
%A Kosiorek, Adam R.
%D 2021
%K 2021 3D NeRF segmentation unsupervised
%T Decomposing 3D Scenes into Objects via Unsupervised Volume Segmentation
%U http://arxiv.org/abs/2104.01148
%X We present ObSuRF, a method which turns a single image of a scene into a 3D
model represented as a set of Neural Radiance Fields (NeRFs), with each NeRF
corresponding to a different object. A single forward pass of an encoder
network outputs a set of latent vectors describing the objects in the scene.
These vectors are used independently to condition a NeRF decoder, defining the
geometry and appearance of each object. We make learning more computationally
efficient by deriving a novel loss, which allows training NeRFs on RGB-D inputs
without explicit ray marching. After confirming that the model performs on par
with or better than the state of the art on three 2D image segmentation
benchmarks, we apply it to two multi-object 3D datasets: a multiview version of
CLEVR, and a novel dataset in which scenes are populated by ShapeNet models. We
find that after training ObSuRF on RGB-D views of training scenes, it is
capable of not only recovering the 3D geometry of a scene depicted in a single
input image, but also of segmenting it into objects, despite receiving no
supervision in that regard.
@misc{stelzner2021decomposing,
abstract = {We present ObSuRF, a method which turns a single image of a scene into a 3D
model represented as a set of Neural Radiance Fields (NeRFs), with each NeRF
corresponding to a different object. A single forward pass of an encoder
network outputs a set of latent vectors describing the objects in the scene.
These vectors are used independently to condition a NeRF decoder, defining the
geometry and appearance of each object. We make learning more computationally
efficient by deriving a novel loss, which allows training NeRFs on RGB-D inputs
without explicit ray marching. After confirming that the model performs on par
with or better than the state of the art on three 2D image segmentation
benchmarks, we apply it to two multi-object 3D datasets: a multiview version of
CLEVR, and a novel dataset in which scenes are populated by ShapeNet models. We
find that after training ObSuRF on RGB-D views of training scenes, it is
capable of not only recovering the 3D geometry of a scene depicted in a single
input image, but also of segmenting it into objects, despite receiving no
supervision in that regard.},
added-at = {2021-04-05T07:20:40.000+0200},
author = {Stelzner, Karl and Kersting, Kristian and Kosiorek, Adam R.},
biburl = {https://www.bibsonomy.org/bibtex/2dfebe2effd85fb09e0fa49fc4d63821a/analyst},
description = {[2104.01148] Decomposing 3D Scenes into Objects via Unsupervised Volume Segmentation},
interhash = {c59bc2982bf215ca5e279ea64a4b4ad8},
intrahash = {dfebe2effd85fb09e0fa49fc4d63821a},
keywords = {2021 3D NeRF segmentation unsupervised},
note = {cite arxiv:2104.01148. Comment: 15 pages, 3 figures. For project page with videos, see http://stelzner.github.io/obsurf/},
timestamp = {2021-04-05T07:20:40.000+0200},
title = {Decomposing 3D Scenes into Objects via Unsupervised Volume Segmentation},
url = {http://arxiv.org/abs/2104.01148},
year = 2021
}