P. Huang, K. Matzen, J. Kopf, N. Ahuja, and J. Huang. DeepMVS: Learning Multi-view Stereopsis. CVPR 2018. arXiv:1804.00650. Project page: https://phuang17.github.io/DeepMVS/ Code: https://github.com/phuang17/DeepMVS
Abstract
We present DeepMVS, a deep convolutional neural network (ConvNet) for
multi-view stereo reconstruction. Taking an arbitrary number of posed images as
input, we first produce a set of plane-sweep volumes and use the proposed
DeepMVS network to predict high-quality disparity maps. The key contributions
that enable these results are (1) supervised pretraining on a photorealistic
synthetic dataset, (2) an effective method for aggregating information across a
set of unordered images, and (3) integrating multi-layer feature activations
from the pre-trained VGG-19 network. We validate the efficacy of DeepMVS using
the ETH3D Benchmark. Our results show that DeepMVS compares favorably against
state-of-the-art conventional MVS algorithms and other ConvNet-based methods,
particularly for near-textureless regions and thin structures.
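The abstract's first step, building plane-sweep volumes, can be illustrated with a minimal sketch. This is not the paper's implementation (which warps an arbitrary number of posed views via calibrated homographies); it is a simplified rectified two-view version in NumPy, where each disparity hypothesis reduces to a horizontal shift of the source image and the volume stores per-pixel photometric matching cost:

```python
import numpy as np

def plane_sweep_volume(ref, src, num_disparities):
    """Build a cost volume for a rectified image pair.

    For each disparity hypothesis d, the source image is shifted by d
    pixels toward the reference view; where the shifted source agrees
    with the reference, d is likely the true disparity.
    """
    h, w = ref.shape
    volume = np.zeros((num_disparities, h, w), dtype=np.float32)
    for d in range(num_disparities):
        warped = np.zeros_like(src)
        if d == 0:
            warped[:] = src
        else:
            warped[:, d:] = src[:, :-d]  # shift right by d pixels
        # Absolute photometric difference as the matching cost.
        volume[d] = np.abs(ref.astype(np.float32) - warped.astype(np.float32))
    return volume
```

A winner-take-all disparity map is then `volume.argmin(axis=0)`; DeepMVS instead feeds such volumes to a ConvNet that aggregates evidence across many unordered views before predicting disparity.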
@misc{huang2018deepmvs,
abstract = {We present DeepMVS, a deep convolutional neural network (ConvNet) for
multi-view stereo reconstruction. Taking an arbitrary number of posed images as
input, we first produce a set of plane-sweep volumes and use the proposed
DeepMVS network to predict high-quality disparity maps. The key contributions
that enable these results are (1) supervised pretraining on a photorealistic
synthetic dataset, (2) an effective method for aggregating information across a
set of unordered images, and (3) integrating multi-layer feature activations
from the pre-trained VGG-19 network. We validate the efficacy of DeepMVS using
the ETH3D Benchmark. Our results show that DeepMVS compares favorably against
state-of-the-art conventional MVS algorithms and other ConvNet-based methods,
particularly for near-textureless regions and thin structures.},
author = {Huang, Po-Han and Matzen, Kevin and Kopf, Johannes and Ahuja, Narendra and Huang, Jia-Bin},
keywords = {2018 arxiv deep-learning multi-view paper reconstruction stereo},
note = {arXiv:1804.00650. CVPR 2018. Project page: https://phuang17.github.io/DeepMVS/ Code: https://github.com/phuang17/DeepMVS},
title = {DeepMVS: Learning Multi-view Stereopsis},
url = {http://arxiv.org/abs/1804.00650},
year = 2018
}