@misc{liu2021groupfree,
abstract = {Recently, directly detecting 3D objects from 3D point clouds has received
increasing attention. To extract object representation from an irregular point
cloud, existing methods usually take a point grouping step to assign the points
to an object candidate so that a PointNet-like network could be used to derive
object features from the grouped points. However, the inaccurate point
assignments caused by the hand-crafted grouping scheme decrease the performance
of 3D object detection.
In this paper, we present a simple yet effective method for directly
detecting 3D objects from the 3D point cloud. Instead of grouping local points
to each object candidate, our method computes the feature of an object from all
the points in the point cloud with the help of an attention mechanism in the
Transformers \cite{vaswani2017attention}, where the contribution of each point
is automatically learned in the network training. With an improved attention
stacking scheme, our method fuses object features in different stages and
generates more accurate object detection results. With few bells and whistles,
the proposed method achieves state-of-the-art 3D object detection performance
on two widely used benchmarks, ScanNet V2 and SUN RGB-D. The code and models
are publicly available at \url{https://github.com/zeliu98/Group-Free-3D}},
  author = {Liu, Ze and Zhang, Zheng and Cao, Yue and Hu, Han and Tong, Xin},
  keywords = {2021 3D detection point-cloud transformer},
  note = {cite arxiv:2104.00678},
title = {Group-Free 3D Object Detection via Transformers},
url = {http://arxiv.org/abs/2104.00678},
year = 2021
}