Explore the latest research from Facebook
All Publications
High-sensitivity multispeckle diffuse correlation spectroscopy
Cerebral blood flow is an important biomarker of brain health and function, as it regulates the delivery of oxygen and substrates to tissue and the removal of metabolic waste products. Moreover, blood flow changes in specific areas of the brain are correlated with neuronal activity in those areas. Diffuse correlation spectroscopy (DCS) is a promising noninvasive optical technique for monitoring cerebral blood flow and for measuring cortical activation during functional tasks. However, adoption of current state-of-the-art DCS is hindered by a trade-off between sensitivity to the cortex and signal-to-noise ratio (SNR).
Paper
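To make the SNR trade-off in the abstract above concrete, here is a minimal, hypothetical sketch (not the authors' instrument pipeline) of the multispeckle idea: the normalized intensity autocorrelation g2(τ) is computed for each speckle channel and then averaged across many independent channels, which raises SNR roughly with the square root of the channel count. The simulated Poisson photon-count traces and the channel count are illustrative assumptions.

```python
import numpy as np

def g2_autocorrelation(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2
    for a single speckle channel."""
    mean_sq = intensity.mean() ** 2
    g2 = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        g2[lag - 1] = np.mean(intensity[:-lag] * intensity[lag:]) / mean_sq
    return g2

def multispeckle_g2(channels, max_lag):
    """Average g2 over independent speckle channels; SNR grows roughly as
    sqrt(number of channels), which is the multispeckle advantage."""
    return np.mean([g2_autocorrelation(c, max_lag) for c in channels], axis=0)

# Toy example: simulated photon-count traces for 32 independent speckle channels.
rng = np.random.default_rng(0)
channels = rng.poisson(lam=5.0, size=(32, 10_000)).astype(float)
g2_curve = multispeckle_g2(channels, max_lag=50)
```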
Burst Denoising via Temporally Shifted Wavelet Transforms
We propose an end-to-end trainable burst denoising pipeline that jointly captures high-resolution and high-frequency deep features derived from wavelet transforms. In our model, fine local details are preserved in the high-frequency sub-band features to enhance the final perceptual quality, while the low-frequency sub-band features carry structural information for faithful reconstruction and good objective quality.
Paper
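For readers unfamiliar with the sub-band terminology in the abstract above, the following minimal sketch uses PyWavelets to split a frame into a low-frequency approximation sub-band and high-frequency detail sub-bands. This is a plain, non-learned decomposition, not the paper's temporally shifted, trainable transform; the Haar wavelet and the random test frame are arbitrary choices.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_subbands(frame):
    """One-level 2D DWT: the approximation sub-band carries coarse structure,
    the detail sub-bands carry high-frequency local detail."""
    low, (horiz, vert, diag) = pywt.dwt2(frame, wavelet="haar")
    return low, (horiz, vert, diag)

def reconstruct(low, details):
    """Invert the transform; a denoiser would modify the sub-bands in between."""
    return pywt.idwt2((low, details), wavelet="haar")

frame = np.random.rand(128, 128)        # stand-in for one grayscale burst frame
low, details = wavelet_subbands(frame)
restored = reconstruct(low, details)    # equals `frame` up to numerical error
```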
One Shot 3D Photography
3D photography is a new medium that allows viewers to more fully experience a captured moment. In this work, we refer to a 3D photo as one that displays parallax induced by moving the viewpoint (as opposed to a stereo pair with a fixed viewpoint). 3D photos are static in time, like traditional photos, but are displayed with interactive parallax on mobile or desktop screens, as well as on Virtual Reality devices, where viewing additionally includes stereo. We present an end-to-end system for creating and viewing 3D photos, and the algorithmic and design choices therein.
Paper
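As a rough illustration of where the parallax described above comes from, the sketch below reprojects a color-and-depth photo to a shifted viewpoint: each pixel is back-projected with its depth, the virtual camera is translated, and the point is projected again. This is a generic pinhole-camera exercise with made-up intrinsics, depth, and viewpoint offset, not the paper's rendering pipeline.

```python
import numpy as np

def reproject_rgbd(depth, K, t):
    """Back-project every pixel with its depth, translate the virtual camera by t,
    and project again; the per-pixel displacement is the parallax the viewer sees."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # homogeneous pixels
    rays = pixels @ np.linalg.inv(K).T                                  # rays in camera frame
    points = rays * depth[..., None]                                    # 3D points
    moved = points - t                                                  # camera offset by t
    proj = moved @ K.T
    return proj[..., :2] / proj[..., 2:3]                               # new pixel coordinates

K = np.array([[500.0, 0.0, 64.0], [0.0, 500.0, 64.0], [0.0, 0.0, 1.0]])  # toy intrinsics
depth = np.full((128, 128), 2.0)                                          # toy depth map (metres)
new_uv = reproject_rgbd(depth, K, t=np.array([0.02, 0.0, 0.0]))           # small lateral motion
```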
Consistent Video Depth Estimation
We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video. We leverage a conventional structure-from-motion reconstruction to establish geometric constraints on pixels in the video.
Paper
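To illustrate the kind of geometric constraint a structure-from-motion reconstruction can provide, here is a hypothetical single-pixel consistency check: a pixel is lifted into 3D using its depth estimate in one frame, transferred to another frame with the SfM pose, and the transferred depth is compared against that frame's depth estimate. The function name, toy intrinsics, and toy depths are assumptions; the paper's actual formulation may differ.

```python
import numpy as np

def reprojection_depth_error(depth_i, depth_j, K, R, t, u, v):
    """Geometric-consistency check for one pixel: lift pixel (u, v) from frame i
    using its estimated depth, transform it into frame j with the SfM pose (R, t),
    and compare the predicted depth with frame j's estimate at the reprojected pixel."""
    p_i = depth_i[v, u] * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # 3D point, frame i
    p_j = R @ p_i + t                                                 # same point, frame j
    uvw = K @ p_j
    u_j, v_j = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
    predicted_depth = p_j[2]
    observed_depth = depth_j[v_j, u_j]
    return abs(predicted_depth - observed_depth)

# Toy data: identical flat depth maps and an identity pose give zero error.
K = np.array([[400.0, 0.0, 32.0], [0.0, 400.0, 32.0], [0.0, 0.0, 1.0]])
depth = np.full((64, 64), 3.0)
err = reprojection_depth_error(depth, depth, K, np.eye(3), np.zeros(3), u=20, v=20)
```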
Synthetic Defocus and Look-Ahead Autofocus for Casual Videography
In cinema, large camera lenses create beautiful shallow depth of field (DOF), but make focusing difficult and expensive. Accurate cinema focus usually relies on a script and a person to control focus in real time. Casual videographers often crave cinematic focus, but fail to achieve it: we either sacrifice shallow DOF, as in smartphone videos, or we struggle to deliver accurate focus, as in videos from larger cameras. This paper is about a new approach in the pursuit of cinematic focus for casual videography.
Paper
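As a toy illustration of synthetic shallow DOF, the sketch below computes a thin-lens circle-of-confusion size per pixel from a depth map and blends in progressively blurred copies of the image where the scene is far from the focal plane. The constants, the box blur, and the grayscale input are illustrative assumptions, not the paper's rendering or look-ahead autofocus method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def synthetic_defocus(image, depth, focus_depth, aperture=0.05, focal=0.03,
                      px_per_m=20000, max_kernel=15):
    """Crude synthetic shallow depth of field for a grayscale image: a thin-lens
    circle of confusion is computed per pixel from depth, converted to a blur-kernel
    size, and pixels far from the focal plane are replaced by blurred versions."""
    # Thin-lens circle of confusion (metres), converted to a kernel size in pixels.
    coc_m = aperture * focal * np.abs(depth - focus_depth) / (depth * (focus_depth - focal))
    kernel = np.clip((coc_m * px_per_m).astype(int), 0, max_kernel)

    out = image.copy()
    for k in range(2, max_kernel + 1):
        blurred = uniform_filter(image, size=k)      # simple box blur of width k
        out = np.where(kernel >= k, blurred, out)    # stronger blur farther from focus
    return out

image = np.random.rand(96, 96)                       # toy grayscale frame
depth = np.tile(np.linspace(1.0, 5.0, 96), (96, 1))  # toy depth ramp (metres)
shallow = synthetic_defocus(image, depth, focus_depth=2.0)
```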
VPLNet: Deep Single View Normal Estimation with Vanishing Points and Lines
We present a novel single-view surface normal estimation method that combines traditional line and vanishing point analysis with a deep learning approach.
Paper
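The "traditional line and vanishing point analysis" mentioned above can be illustrated with classical projective geometry: an image line through two points is their cross product in homogeneous coordinates, and a vanishing point is the least-squares intersection of a family of such lines. The toy line segments below are assumptions; the paper combines this kind of analysis with a learned network.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(lines):
    """Least-squares intersection of homogeneous lines l . v = 0: the right singular
    vector with the smallest singular value is the point closest to lying on all lines."""
    A = np.stack(lines)
    _, _, vt = np.linalg.svd(A)
    v = vt[-1]
    return v[:2] / v[2]            # back to inhomogeneous image coordinates

# Toy example: two converging edges that meet near pixel (500, 300).
l1 = line_through((0, 0), (500, 300))
l2 = line_through((0, 600), (500, 300))
vp = vanishing_point([l1, l2])     # approximately (500, 300)
```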
Fast Depth Densification for Occlusion-aware Augmented Reality
Current AR systems track only sparse geometric features but do not compute depth for all pixels. For this reason, most AR effects are pure overlays that can never be occluded by real objects. We present a novel algorithm that propagates sparse depth to every pixel in near real time.
Paper
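For intuition about sparse-to-dense propagation, here is a stand-in sketch that spreads depth from a handful of tracked features to every pixel with plain scattered-data interpolation. The random feature positions and depths are made up, and the paper's actual densification algorithm is not this simple interpolation.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_depth(sparse_uv, sparse_depth, shape):
    """Propagate depth from sparse tracked features to every pixel via
    scattered-data interpolation (a stand-in for the paper's method)."""
    h, w = shape
    grid_v, grid_u = np.mgrid[0:h, 0:w]
    dense = griddata(sparse_uv, sparse_depth, (grid_u, grid_v), method="linear")
    # Fill pixels outside the convex hull of the features with nearest depths.
    holes = np.isnan(dense)
    dense[holes] = griddata(sparse_uv, sparse_depth,
                            (grid_u[holes], grid_v[holes]), method="nearest")
    return dense

# Toy example: 200 sparse features with random (u, v) positions and depths.
rng = np.random.default_rng(1)
uv = rng.uniform(0, 240, size=(200, 2))
z = rng.uniform(1.0, 4.0, size=200)
dense_depth = densify_depth(uv, z, shape=(240, 240))
```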
Value-aware Quantization for Training and Inference of Neural Networks
We propose a novel value-aware quantization scheme that applies aggressively reduced precision to the majority of data while separately handling a small number of large values in high precision, thereby reducing total quantization error at very low precision.
Paper
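The core idea lends itself to a short numpy illustration: keep the small fraction of large-magnitude values in full precision and uniformly quantize everything else at a very low bit width. The outlier fraction, bit width, and uniform quantizer below are illustrative assumptions, not the exact scheme in the paper.

```python
import numpy as np

def value_aware_quantize(x, outlier_fraction=0.01, bits=3):
    """Quantize the bulk of a tensor at very low precision while keeping the small
    fraction of large-magnitude values in full precision; those few outliers would
    otherwise dominate the quantization error at low bit widths."""
    flat_abs = np.abs(x).ravel()
    k = max(1, int(round(outlier_fraction * x.size)))
    threshold = np.partition(flat_abs, x.size - k)[x.size - k]  # k-th largest magnitude
    outliers = np.abs(x) >= threshold

    # Uniform low-precision quantization of the non-outlier values only.
    levels = 2 ** bits - 1
    scale = np.abs(x[~outliers]).max() / levels
    quantized = np.round(x / scale) * scale

    return np.where(outliers, x, quantized)  # large values stay in high precision

x = np.random.randn(10_000).astype(np.float32)
x[:5] *= 50.0                                # a few large-magnitude outliers
xq = value_aware_quantize(x, outlier_fraction=0.01, bits=3)
```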
Instant 3D Photography
We present an algorithm for constructing 3D panoramas from a sequence of aligned color-and-depth image pairs. Such sequences can be conveniently captured using dual-lens cell phone cameras that reconstruct depth maps from synchronized stereo image capture.
Paper
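As background on how the dual-lens capture yields depth, a rectified stereo pair obeys depth = focal_length × baseline / disparity; the short sketch below applies that relation with made-up camera numbers. This is only the standard stereo relation, not the paper's panorama-construction algorithm.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: depth = focal_length * baseline / disparity."""
    return focal_px * baseline_m / np.maximum(disparity_px, 1e-6)

# Toy numbers: ~1.2 cm baseline, 1500 px focal length, 9 px disparity -> ~2 m depth.
depth_m = disparity_to_depth(np.array([9.0]), focal_px=1500.0, baseline_m=0.012)
```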
DeepMVS: Learning Multi-view Stereopsis
We present DeepMVS, a deep convolutional neural network (ConvNet) for multi-view stereo reconstruction.
Paper