Explore the latest research from Facebook

All Publications

August 9, 2021 Sai Bi, Stephen Lombardi, Shunsuke Saito, Tomas Simon, Shih-En Wei, Kevyn Mcphail, Ravi Ramamoorthi, Yaser Sheikh, Jason Saragih

Deep Relightable Appearance Models for Animatable Faces

We present a method for building high-fidelity animatable 3D face models that can be posed and rendered with novel lighting environments in real time.
August 9, 2021 Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, Jason Saragih

Mixture of Volumetric Primitives for Efficient Neural Rendering

We present Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering, e.g., point-based or mesh-based methods. Our approach achieves this by leveraging spatially shared computation with a convolutional architecture and by minimizing computation in empty regions of space with volumetric primitives that can move to cover only occupied regions.
August 9, 2021 Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabián Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, Jason Saragih

Driving-Signal Aware Full-Body Avatars

The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals from the remaining generative factors, which are not available during animation.
July 18, 2021 Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra

AlphaNet: Improved Training of Supernets with Alpha-Divergence

In this work, we propose to improve supernet training with a more general α-divergence. By adaptively selecting the α-divergence, we simultaneously prevent over- and under-estimation of the teacher model's uncertainty.
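To make the α-divergence concrete, here is a minimal numpy sketch of one common (Amari-style) parameterization for discrete distributions; the paper's exact formulation and the adaptive selection of α are not reproduced here, so treat the function below as an illustrative assumption:

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """Amari-style alpha-divergence between discrete distributions p and q
    (one common parameterization; an assumption, not the paper's exact form).
    The limit alpha -> 1 recovers KL(p||q); alpha -> 0 recovers KL(q||p)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):
        return float(np.sum(p * np.log(p / q)))   # forward KL limit
    if np.isclose(alpha, 0.0):
        return float(np.sum(q * np.log(q / p)))   # reverse KL limit
    # General case: (1 - sum p^a q^(1-a)) / (a (1 - a))
    return float((1.0 - np.sum(p ** alpha * q ** (1.0 - alpha)))
                 / (alpha * (1.0 - alpha)))
```

Varying α interpolates between mode-seeking (reverse-KL-like) and mass-covering (forward-KL-like) behavior, which is what motivates selecting it adaptively during distillation.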
July 18, 2021 Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny

Barlow Twins: Self-Supervised Learning via Redundancy Reduction

We propose an objective function that naturally avoids collapse by measuring the cross-correlation matrix between the outputs of two identical networks fed with distorted versions of a sample, and making it as close to the identity matrix as possible. This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
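The objective described above can be sketched in a few lines of numpy. The trade-off weight `lam` and the batch-normalization of embeddings follow the description in the abstract; the specific value of `lam` is an assumption for illustration:

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Sketch of the Barlow Twins objective: compute the cross-correlation
    matrix between batch-normalized embeddings of two distorted views and
    push it toward the identity. `lam` (assumed value) weights the
    off-diagonal redundancy-reduction term."""
    n, d = z_a.shape
    # Normalize each embedding dimension over the batch.
    z_a = (z_a - z_a.mean(axis=0)) / z_a.std(axis=0)
    z_b = (z_b - z_b.mean(axis=0)) / z_b.std(axis=0)
    # Cross-correlation matrix, shape (d, d).
    c = z_a.T @ z_b / n
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)            # invariance term
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)  # redundancy term
    return on_diag + lam * off_diag
```

When the two views produce identical embeddings, the diagonal of `c` is exactly 1 and the invariance term vanishes; the off-diagonal term then decorrelates the remaining embedding components.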
June 29, 2021 Tiancheng Sun, Giljoo Nam, Carlos Aliaga, Christophe Hery, Ravi Ramamoorthi

Human Hair Inverse Rendering Using Multi-View Photometric Data

We show that our method can faithfully reproduce the appearance of human hair and provide realism for digital humans. We demonstrate the accuracy and efficiency of our method using photorealistic synthetic hair rendering data.
June 28, 2021 Donglai Xiang, Fabián Prada, Timur Bagautdinov, Weipeng Xu, Yuan Dong, He Wen, Jessica Hodgins, Chenglei Wu

Explicit Clothing Modeling for an Animatable Full-Body Avatar

Recent work has shown great progress in building photorealistic animatable full-body codec avatars, but these avatars still face difficulties in generating high-fidelity animation of clothing. To address these difficulties, we propose a method to build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos.
June 25, 2021 Xinlei Chen, Kaiming He

Exploring Simple Siamese Representation Learning

In this paper, we report surprising empirical results that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
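The key ingredient in SimSiam is a symmetrized negative cosine similarity in which each predictor output is compared against a stop-gradient copy of the other view's projection. A minimal numpy sketch (in an autograd framework, the stop-gradient on `z` is what prevents collapse; here it is only indicated by convention):

```python
import numpy as np

def neg_cosine(p, z):
    """Negative cosine similarity, averaged over the batch. `z` stands in
    for the stop-gradient branch: in a real implementation no gradient
    flows through it."""
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -np.mean(np.sum(p * z, axis=1))

def simsiam_loss(p1, p2, z1, z2):
    """Symmetrized loss: each view's predictor output (p) is matched
    against the detached projection (z) of the other view."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```

The loss is bounded below by -1, reached when predictor outputs and the opposing projections are perfectly aligned; no negative pairs, large batches, or momentum encoder appear anywhere in it.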
June 24, 2021 Zhongzheng Ren, Ishan Misra, Alexander Schwing, Rohit Girdhar

3D Spatial Recognition without Spatially Labeled 3D

We introduce WyPR, a Weakly-supervised framework for Point cloud Recognition, requiring only scene-level class tags as supervision. WyPR jointly addresses three core 3D recognition tasks: point-level semantic segmentation, 3D proposal generation, and 3D object detection, coupling their predictions through self and cross-task consistency losses.
June 23, 2021 Yufei Ye, Shubham Tulsiani, Abhinav Gupta

Shelf-Supervised Mesh Prediction in the Wild

We aim to infer the 3D shape and pose of objects from a single image and propose a learning-based approach that can train from unstructured image collections, supervised only by segmentation outputs from off-the-shelf recognition systems (i.e., 'shelf-supervised').