Explore the latest research from Facebook

All Publications

October 20, 2021 Sachin Mehta, Amit Kumar, Fitsum Reda, Varun Nasery, Vikram Mulukutla, Rakesh Ranjan, Vikas Chandra

EVRNet: Efficient Video Restoration on Edge Devices

In video transmission applications, signals sent over lossy channels arrive degraded at the receiver. To restore these videos on recipient edge devices in real time, we introduce EVRNet, an efficient video restoration network.
Paper
October 4, 2021 Kai-En Lin, Lei Xiao, Feng Liu, Guowei Yang, Ravi Ramamoorthi

Deep 3D Mask Volume for View Synthesis of Dynamic Scenes

We develop a new algorithm, Deep 3D Mask Volume, which enables temporally stable view extrapolation from binocular videos of dynamic scenes, captured by static cameras.
Paper
August 9, 2021 Sai Bi, Stephen Lombardi, Shunsuke Saito, Tomas Simon, Shih-En Wei, Kevyn Mcphail, Ravi Ramamoorthi, Yaser Sheikh, Jason Saragih

Deep Relightable Appearance Models for Animatable Faces

We present a method for building high-fidelity animatable 3D face models that can be posed and rendered with novel lighting environments in real time.
Paper
June 19, 2021 Wenqi Xian, Jia-Bin Huang, Johannes Kopf, Changil Kim

Space-time Neural Irradiance Fields for Free-Viewpoint Video

We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video. Our learned representation enables free-viewpoint rendering of the input video. Our method builds upon recent advances in implicit representations. Learning a spatiotemporal irradiance field from a single video poses significant challenges because the video contains only one observation of the scene at any point in time.
Paper
September 26, 2020 Edbert J. Sie, Hui Chen, E-Fann Saung, Ryan Catoen, Tobias Tiecke, Mark A. Chevillet, Francesco Marsili

High-sensitivity multispeckle diffuse correlation spectroscopy

Cerebral blood flow is an important biomarker of brain health and function, as it regulates the delivery of oxygen and substrates to tissue and the removal of metabolic waste products. Moreover, blood flow changes in specific areas of the brain are correlated with neuronal activity in those areas. Diffuse correlation spectroscopy (DCS) is a promising noninvasive optical technique for monitoring cerebral blood flow and for measuring cortical activation during functional tasks. However, adoption of current state-of-the-art DCS is hindered by a trade-off between sensitivity to the cortex and signal-to-noise ratio (SNR).
Paper
August 23, 2020 Xuejian Rong, Denis Demandolx, Kevin Matzen, Priyam Chatterjee, Yingli Tian

Burst Denoising via Temporally Shifted Wavelet Transforms

We propose an end-to-end trainable burst denoising pipeline which jointly captures high-resolution and high-frequency deep features derived from wavelet transforms. In our model, fine local details are preserved in the high-frequency sub-band features to enhance the final perceptual quality, while the low-frequency sub-band features carry structural information for faithful reconstruction and high objective quality.
Paper
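As a rough illustration of the sub-band split the abstract refers to, the sketch below performs a single-level 2D Haar wavelet decomposition in NumPy: the LL band carries low-frequency structure, while LH/HL/HH carry high-frequency detail. This is a toy stand-in, not the paper's learned pipeline; all function names are ours.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar transform: split an image (even height/width)
    into a low-frequency approximation (LL) and three high-frequency
    detail sub-bands (LH, HL, HH)."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 2.0   # coarse structure
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: the four sub-bands reconstruct the image exactly."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```

A denoiser operating in this domain can treat the LL band (structure) and the detail bands (texture, edges, noise) differently, which is the intuition behind preserving details in high-frequency features.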
August 23, 2020 Rohan Chabra, Jan E. Lenssen, Eddy Ilg, Tanner Schmidt, Julian Straub, Steven Lovegrove, Richard Newcombe

Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction

Efficiently reconstructing complex and intricate surfaces at scale is a long-standing goal in machine perception. To address this problem we introduce Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements. DeepLS replaces the dense volumetric signed distance function (SDF) representation used in traditional surface reconstruction systems with a set of locally learned continuous SDFs defined by a neural network, inspired by recent work such as DeepSDF.
Paper
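For readers unfamiliar with the representation DeepLS builds on, the sketch below shows what a signed distance function (SDF) is and what a dense volumetric SDF grid looks like, using an analytic sphere. It is a toy illustration of the concept only, not the DeepLS model; all names are ours.

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside,
    zero on the surface, positive outside."""
    return np.linalg.norm(p - center, axis=-1) - radius

# A dense volumetric SDF stores one distance value per voxel,
# which is what makes it memory-hungry at high resolution.
res = 64
xs = np.linspace(-1.5, 1.5, res)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
dense_sdf = sphere_sdf(grid)   # shape (res, res, res)

# The surface is the zero level set; voxels near a sign change
# straddle the surface and are where reconstruction happens.
surface_mask = np.abs(dense_sdf) < (xs[1] - xs[0])
```

DeepLS's idea, per the abstract, is to replace such a dense grid with many small locally learned continuous SDFs, trading stored voxels for network-encoded local shape priors.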
August 17, 2020 Johannes Kopf, Kevin Matzen, Suhib Alsisan, Ocean Quigley, Francis Ge, Yangming Chong, Josh Patterson, Jan-Michael Frahm, Shu Wu, Matthew Yu, Peizhao Zhang, Zijian He, Peter Vajda, Ayush Saraf, Michael Cohen

One Shot 3D Photography

3D photography is a new medium that allows viewers to more fully experience a captured moment. In this work, we refer to a 3D photo as one that displays parallax induced by moving the viewpoint (as opposed to a stereo pair with a fixed viewpoint). 3D photos are static in time, like traditional photos, but are displayed with interactive parallax on mobile or desktop screens, as well as on Virtual Reality devices, where viewing also includes stereo. We present an end-to-end system for creating and viewing 3D photos, and the algorithmic and design choices therein.
Paper
August 17, 2020 Xuan Luo, Jia-Bin Huang, Richard Szeliski, Kevin Matzen, Johannes Kopf

Consistent Video Depth Estimation

We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video. We leverage a conventional structure-from-motion reconstruction to establish geometric constraints on pixels in the video.
Paper
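The geometric constraint mentioned above can be illustrated with a simple reprojection check: lift a pixel to 3D using its predicted depth, map it into another frame with the structure-from-motion pose, and compare depths. This is a hedged NumPy sketch of the general idea, not the authors' implementation; all names are ours.

```python
import numpy as np

def reproject(px, depth, K, R, t):
    """Lift pixel px in frame i with predicted depth to a 3D point,
    then map it into frame j via the relative pose (R, t) recovered
    by structure-from-motion. Returns the pixel location in frame j
    and the depth the 3D point implies there."""
    uv1 = np.array([px[0], px[1], 1.0])
    p_i = depth * (np.linalg.inv(K) @ uv1)   # 3D point in frame i's camera
    p_j = R @ p_i + t                        # same point in frame j's camera
    z_j = p_j[2]
    uv_j = (K @ (p_j / z_j))[:2]
    return uv_j, z_j
```

Geometric consistency then means the depth predicted at `uv_j` in frame j should agree with `z_j`; a squared difference between the two is a natural per-pixel consistency penalty.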
July 28, 2020 Xuaner Zhang, Kevin Matzen, Vivien Nguyen, Dillon Yao, You Zhang, Ren Ng

Synthetic Defocus and Look-Ahead Autofocus for Casual Videography

In cinema, large camera lenses create beautiful shallow depth of field (DOF), but make focusing difficult and expensive. Accurate cinema focus usually relies on a script and a person to control focus in real time. Casual videographers often crave cinematic focus, but fail to achieve it: we either sacrifice shallow DOF, as in smartphone videos, or we struggle to deliver accurate focus, as in videos from larger cameras. This paper presents a new approach in the pursuit of cinematic focus for casual videography.
Paper