@misc{wang2018dynamic,
abstract = {Point clouds provide a flexible and scalable geometric representation
suitable for countless applications in computer graphics; they also comprise
the raw output of most 3D data acquisition devices. Hence, the design of
intelligent computational models that act directly on point clouds is critical,
especially when efficiency considerations or noise preclude the possibility of
expensive denoising and meshing procedures. While hand-designed features on
point clouds have long been proposed in graphics and vision, the
recent overwhelming success of convolutional neural networks (CNNs) for image
analysis suggests the value of adapting insights from CNNs to the point cloud
world. To this end, we propose a new neural network module dubbed EdgeConv
suitable for CNN-based high-level tasks on point clouds including
classification and segmentation. EdgeConv is differentiable and can be plugged
into existing architectures. Compared to existing modules operating largely in
extrinsic space or treating each point independently, EdgeConv has several
appealing properties: It incorporates local neighborhood information; it can be
stacked or recurrently applied to learn global shape properties; and in
multi-layer systems affinity in feature space captures semantic characteristics
over potentially long distances in the original embedding. Beyond proposing
this module, we provide extensive evaluation and analysis revealing that
EdgeConv captures and exploits fine-grained geometric properties of point
clouds. The proposed approach achieves state-of-the-art performance on standard
benchmarks including ModelNet40 and S3DIS.},
author = {Wang, Yue and Sun, Yongbin and Liu, Ziwei and Sarma, Sanjay E. and Bronstein, Michael M. and Solomon, Justin M.},
keywords = {CNN graph point_cloud},
eprint = {1801.07829},
archiveprefix = {arXiv},
title = {Dynamic Graph CNN for Learning on Point Clouds},
url = {http://arxiv.org/abs/1801.07829},
year = 2018
}