Implicit Geometric Regularization for Learning Shapes
A. Gropp, L. Yariv, N. Haim, M. Atzmon, and Y. Lipman (2020). arXiv:2002.10099. Comment: 37th International Conference on Machine Learning, Vienna, Austria, 2020.
Abstract
Representing shapes as level sets of neural networks has recently proven
useful for a range of shape analysis and reconstruction tasks. So far,
such representations have been computed using either (i) pre-computed implicit
shape representations or (ii) loss functions explicitly defined over the
neural level sets. In this paper we offer a new paradigm for computing
high-fidelity implicit neural representations directly from raw data (i.e., point
clouds, with or without normal information). We observe that a rather simple
loss function, encouraging the neural network to vanish on the input point
cloud and to have a unit-norm gradient, possesses an implicit geometric
regularization property that favors smooth and natural zero-level-set surfaces
and avoids bad zero-loss solutions. We provide a theoretical analysis of this
property for the linear case and show that, in practice, our method leads to
state-of-the-art implicit neural representations with a greater level of detail
and fidelity than previous methods.
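The loss described in the abstract (the network should vanish on the point cloud and have a unit-norm gradient) can be sketched for the linear case that the paper analyzes theoretically, where the gradient of f(x) = w·x + b is simply w. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, the L1 data term, and the `lam` weight are assumptions.

```python
import numpy as np

def igr_loss_linear(w, b, points, lam=0.1):
    """Sketch of the abstract's loss for a linear model f(x) = w.x + b.

    For a linear f, grad f = w everywhere, so the unit-gradient
    (Eikonal-type) term reduces to (||w|| - 1)^2. The weight `lam`
    and the exact form of the data term are illustrative choices.
    """
    # Data term: f should vanish on the input point cloud.
    data_term = np.mean(np.abs(points @ w + b))
    # Regularization term: the gradient of f should have unit norm.
    eikonal_term = (np.linalg.norm(w) - 1.0) ** 2
    return data_term + lam * eikonal_term
```

For example, with w = (1, 0), b = 0, and points lying on the plane {x1 = 0}, both terms vanish and the loss is zero; scaling w away from unit norm incurs the regularization penalty even though the zero level set is unchanged.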
Description
Implicit Geometric Regularization for Learning Shapes
%0 Generic
%1 gropp2020implicit
%A Gropp, Amos
%A Yariv, Lior
%A Haim, Niv
%A Atzmon, Matan
%A Lipman, Yaron
%D 2020
%K nerf
%T Implicit Geometric Regularization for Learning Shapes
%U http://arxiv.org/abs/2002.10099
%X Representing shapes as level sets of neural networks has been recently proved
to be useful for different shape analysis and reconstruction tasks. So far,
such representations were computed using either: (i) pre-computed implicit
shape representations; or (ii) loss functions explicitly defined over the
neural level sets. In this paper we offer a new paradigm for computing high
fidelity implicit neural representations directly from raw data (i.e., point
clouds, with or without normal information). We observe that a rather simple
loss function, encouraging the neural network to vanish on the input point
cloud and to have a unit norm gradient, possesses an implicit geometric
regularization property that favors smooth and natural zero level set surfaces,
avoiding bad zero-loss solutions. We provide a theoretical analysis of this
property for the linear case, and show that, in practice, our method leads to
state of the art implicit neural representations with higher level-of-details
and fidelity compared to previous methods.
@misc{gropp2020implicit,
abstract = {Representing shapes as level sets of neural networks has been recently proved
to be useful for different shape analysis and reconstruction tasks. So far,
such representations were computed using either: (i) pre-computed implicit
shape representations; or (ii) loss functions explicitly defined over the
neural level sets. In this paper we offer a new paradigm for computing high
fidelity implicit neural representations directly from raw data (i.e., point
clouds, with or without normal information). We observe that a rather simple
loss function, encouraging the neural network to vanish on the input point
cloud and to have a unit norm gradient, possesses an implicit geometric
regularization property that favors smooth and natural zero level set surfaces,
avoiding bad zero-loss solutions. We provide a theoretical analysis of this
property for the linear case, and show that, in practice, our method leads to
state of the art implicit neural representations with higher level-of-details
and fidelity compared to previous methods.},
added-at = {2022-09-05T10:49:25.000+0200},
author = {Gropp, Amos and Yariv, Lior and Haim, Niv and Atzmon, Matan and Lipman, Yaron},
biburl = {https://www.bibsonomy.org/bibtex/2221c36150574b821692fd9a232cfd09e/m_gabriel},
description = {Implicit Geometric Regularization for Learning Shapes},
interhash = {dcaa8bf33001a50f2513805012a0db4b},
intrahash = {221c36150574b821692fd9a232cfd09e},
keywords = {nerf},
note = {arXiv:2002.10099. Comment: 37th International Conference on Machine Learning, Vienna, Austria, 2020},
timestamp = {2022-09-05T10:49:25.000+0200},
title = {Implicit Geometric Regularization for Learning Shapes},
url = {http://arxiv.org/abs/2002.10099},
year = 2020
}