We present a new deep point cloud rendering pipeline based on multi-plane
projections. The input to the network is the raw point cloud of a scene, and the
output is an image or image sequence from a novel view or along a novel camera
trajectory. Unlike previous approaches that directly project features from 3D
points onto the 2D image domain, we propose to project these features into a
layered volume within the camera frustum. In this way, the visibility of 3D
points can be learned automatically by the network, so that both ghosting
artifacts caused by incorrect visibility checks and occlusions caused by noise
interference are avoided. Next, the 3D feature volume is fed into a 3D CNN to
produce multiple image layers corresponding to the division of the frustum
along the depth direction. The layered images are then blended with learned
weights to produce the final rendering. Experiments show that our network
produces more stable renderings than previous methods, especially near object
boundaries. Moreover, our pipeline is robust to noisy and relatively sparse
point clouds for a variety of challenging scenes.
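
The pipeline the abstract outlines, scattering point features into a layered
frustum volume, running a 3D CNN over it, and blending the resulting image
planes with learned weights, can be sketched in PyTorch. This is a minimal
illustration under stated assumptions, not the authors' code: the names
(project_to_frustum, MultiPlaneRenderer), the uniform depth slicing, the
mean-pooled scatter, and the tiny two-layer 3D CNN are all placeholders for
the paper's actual feature extraction, space division, and network.

import torch
import torch.nn as nn

def project_to_frustum(xyz, feats, K, w2c, near, far, D, H, W):
    """Scatter per-point features into a (C, D, H, W) camera-frustum volume.

    xyz:   (N, 3) world-space points
    feats: (N, C) per-point features
    K:     (3, 3) intrinsics; w2c: (4, 4) world-to-camera extrinsics
    """
    N, C = feats.shape
    ones = torch.ones(N, 1, device=xyz.device)
    cam = (w2c @ torch.cat([xyz, ones], dim=1).T).T[:, :3]  # camera space
    z = cam[:, 2].clamp(min=1e-6)
    uv = (K @ cam.T).T[:, :2] / z[:, None]                  # pixel coords
    u = uv[:, 0].round().long().clamp(0, W - 1)
    v = uv[:, 1].round().long().clamp(0, H - 1)
    # Depth-plane index: uniform division of [near, far] into D slices
    # (one simple choice; the paper's space division may differ).
    d = ((z - near) / (far - near) * D).long().clamp(0, D - 1)
    vol = torch.zeros(C, D, H, W, device=xyz.device)
    cnt = torch.zeros(1, D, H, W, device=xyz.device)
    idx = (d * H + v) * W + u                               # flat voxel id
    vol.view(C, -1).index_add_(1, idx, feats.T)             # sum features
    cnt.view(1, -1).index_add_(1, idx, torch.ones(1, N, device=xyz.device))
    return vol / cnt.clamp(min=1)                           # mean per voxel

class MultiPlaneRenderer(nn.Module):
    """3D CNN over the frustum volume -> D image layers + blend weights."""
    def __init__(self, C):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(C, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 4, 3, padding=1),  # per plane: RGB + weight logit
        )

    def forward(self, vol):                  # vol: (B, C, D, H, W)
        out = self.net(vol)
        rgb = torch.sigmoid(out[:, :3])      # (B, 3, D, H, W) layer colors
        w = torch.softmax(out[:, 3], dim=1)  # (B, D, H, W) blend weights
        return (rgb * w.unsqueeze(1)).sum(dim=2)  # (B, 3, H, W) image

The softmax over the depth dimension resolves visibility softly rather than
through a hard z-buffer test, which is the mechanism the abstract credits for
avoiding ghosting near object boundaries.
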
@misc{dai2019neural,
abstract = {We present a new deep point cloud rendering pipeline based on multi-plane
projections. The input to the network is the raw point cloud of a scene, and the
output is an image or image sequence from a novel view or along a novel camera
trajectory. Unlike previous approaches that directly project features from 3D
points onto the 2D image domain, we propose to project these features into a
layered volume within the camera frustum. In this way, the visibility of 3D
points can be learned automatically by the network, so that both ghosting
artifacts caused by incorrect visibility checks and occlusions caused by noise
interference are avoided. Next, the 3D feature volume is fed into a 3D CNN to
produce multiple image layers corresponding to the division of the frustum
along the depth direction. The layered images are then blended with learned
weights to produce the final rendering. Experiments show that our network
produces more stable renderings than previous methods, especially near object
boundaries. Moreover, our pipeline is robust to noisy and relatively sparse
point clouds for a variety of challenging scenes.},
author = {Dai, Peng and Zhang, Yinda and Li, Zhuwen and Liu, Shuaicheng and Zeng, Bing},
biburl = {https://www.bibsonomy.org/bibtex/204172c907f5bfbff175d8f00e244ad78/analyst},
description = {[1912.04645v1] Neural Point Cloud Rendering via Multi-Plane Projection},
keywords = {2019 graphics point-cloud},
note = {cite arxiv:1912.04645. Comment: 17 pages},
timestamp = {2019-12-11T15:28:00.000+0100},
title = {Neural Point Cloud Rendering via Multi-Plane Projection},
url = {http://arxiv.org/abs/1912.04645},
year = 2019
}