DeepMVS: Learning Multi-view Stereopsis

Computer Vision and Pattern Recognition (CVPR)


We present DeepMVS, a deep convolutional neural network (ConvNet) for multi-view stereo reconstruction. Taking an arbitrary number of posed images as input, we first produce a set of plane-sweep volumes and then use the proposed DeepMVS network to predict high-quality disparity maps. The key contributions that enable these results are (1) supervised pretraining on a photorealistic synthetic dataset, (2) an effective method for aggregating information across a set of unordered images, and (3) integrating multi-layer feature activations from the pre-trained VGG-19 network. We validate the efficacy of DeepMVS using the ETH3D Benchmark. Our results show that DeepMVS compares favorably against state-of-the-art conventional MVS algorithms and other ConvNet-based methods, particularly for near-textureless regions and thin structures.
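As background for the pipeline above, the snippet below is a minimal sketch of plane-sweep volume construction (not the authors' implementation): for each candidate depth, a source image is warped into the reference view via the plane-induced homography, and the warped slices are stacked into a volume. Function names, nearest-neighbor sampling, and the fronto-parallel plane assumption are illustrative choices, not details from the paper.

```python
import numpy as np

def plane_sweep_homography(K_ref, K_src, R, t, depth):
    """Homography mapping reference-view pixels to the source image,
    for a fronto-parallel plane (normal n = [0, 0, 1]) at the given
    depth in the reference camera frame: H = K_src (R + t n^T / d) K_ref^-1."""
    n = np.array([0.0, 0.0, 1.0])
    return K_src @ (R + np.outer(t, n) / depth) @ np.linalg.inv(K_ref)

def build_plane_sweep_volume(ref_shape, K_ref, K_src, R, t, src_img, depths):
    """Warp src_img onto each depth plane (nearest-neighbor sampling);
    stacking the warped slices yields one plane-sweep volume of shape
    (num_depths, H, W). Pixels that project outside src_img are zero."""
    h, w = ref_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels, 3 x (h*w)
    volume = np.zeros((len(depths), h, w), dtype=src_img.dtype)
    for i, d in enumerate(depths):
        H = plane_sweep_homography(K_ref, K_src, R, t, d)
        p = H @ pix
        u = np.round(p[0] / p[2]).astype(int)   # source-image column
        v = np.round(p[1] / p[2]).astype(int)   # source-image row
        valid = (u >= 0) & (u < src_img.shape[1]) & (v >= 0) & (v < src_img.shape[0])
        slice_ = np.zeros(h * w, dtype=src_img.dtype)
        slice_[valid] = src_img[v[valid], u[valid]]
        volume[i] = slice_.reshape(h, w)
    return volume
```

In DeepMVS one such volume is produced per neighboring image; a network can then score the photometric agreement at each depth plane to predict disparity. A real implementation would use bilinear sampling and batched GPU warping rather than this per-plane loop.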

