Explore the latest research from Facebook

All Publications

June 19, 2021 Shugao Ma, Tomas Simon, Jason Saragih, Dawei Wang, Yuecheng Li, Fernando De la Torre, Yaser Sheikh

Pixel Codec Avatars

In this work, we present the Pixel Codec Avatars (PiCA): a deep generative model of 3D human faces that achieves state-of-the-art reconstruction performance while being computationally efficient and adaptive to the rendering conditions during execution.
Paper
June 19, 2021 Ye Yuan, Shih-En Wei, Tomas Simon, Kris Kitani, Jason Saragih

SimPoE: Simulated Character Control for 3D Human Pose Estimation

Accurate estimation of 3D human motion from monocular video requires modeling both kinematics (body motion without physical forces) and dynamics (motion with physical forces).
Paper
June 6, 2021 Panagiotis Tzirakis, Anurag Kumar, Jacob Donley

Multi-Channel Speech Enhancement Using Graph Neural Networks

In this paper, we introduce a different research direction by viewing each audio channel as a node lying in a non-Euclidean space and, specifically, a graph.
Paper
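
As a rough illustration of the graph view of multi-channel audio described above (not the authors' architecture), the sketch below treats each microphone channel as a node in a fully connected graph and applies one round of message passing over per-channel magnitude spectra before predicting an enhancement mask. All shapes, layer names, and the adjacency choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelGraphLayer(nn.Module):
    """One round of message passing over the microphone-channel graph (sketch)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.self_proj = nn.Linear(feat_dim, feat_dim)
        self.neigh_proj = nn.Linear(feat_dim, feat_dim)
        self.mask_head = nn.Linear(feat_dim, feat_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (batch, channels, bins) per-channel magnitude spectra
        # adj: (channels, channels) adjacency over microphone channels
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        neigh = (adj / deg) @ x                      # average features of neighboring channels
        h = torch.relu(self.self_proj(x) + self.neigh_proj(neigh))
        return torch.sigmoid(self.mask_head(h))      # per-channel enhancement mask

# Toy usage: 4 channels, 257 frequency bins, fully connected channel graph.
channels, bins = 4, 257
adj = torch.ones(channels, channels) - torch.eye(channels)
noisy = torch.randn(8, channels, bins).abs()         # a batch of noisy magnitude frames
enhanced = noisy * ChannelGraphLayer(bins)(noisy, adj)
```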
June 1, 2021 Bindita Chaudhuri, Nikolaos Sarafianos, Linda Shapiro, Tony Tung

Semi-supervised Synthesis of High-Resolution Editable Textures for 3D Humans

We introduce a novel approach to generate diverse, high-fidelity texture maps for 3D human meshes in a semi-supervised setup.
Paper
December 8, 2020 Seungwhan Moon, Satwik Kottur, Paul A. Crook, Ankita De, Shivani Poddar, Theodore Levin, David Whitney, Daniel Difranco, Ahmad Beirami, Eunjoon Cho, Rajen Subba, Alborz Geramifard

Situated and Interactive Multimodal Conversations

Next generation virtual assistants are envisioned to handle multimodal inputs (e.g., vision, memories of previous interactions, and the user’s utterances), and perform multimodal actions (e.g., displaying a route while generating the system’s utterance). We introduce Situated Interactive MultiModal Conversations (SIMMC) as a new direction aimed at training agents that take multimodal actions grounded in a co-evolving multimodal input context in addition to the dialog history.
Paper
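
To make the grounding idea above concrete, here is a minimal sketch of how a single SIMMC-style turn might be represented: the agent conditions on the dialog history plus a co-evolving multimodal context and emits both an utterance and a multimodal action. Field names and the action schema are illustrative assumptions, not the dataset's actual format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MultimodalContext:
    visible_objects: List[Dict[str, str]]   # e.g. [{"id": "chair_3", "color": "red"}]
    memories: List[str]                      # references to earlier interactions

@dataclass
class DialogTurn:
    user_utterance: str
    context: MultimodalContext
    agent_utterance: str = ""
    agent_action: Dict[str, str] = field(default_factory=dict)  # e.g. {"type": "SHOW_ROUTE"}

# Toy turn: the agent's action is grounded in what is currently on screen.
turn = DialogTurn(
    user_utterance="Can you show me that red chair again?",
    context=MultimodalContext(
        visible_objects=[{"id": "chair_3", "color": "red"}],
        memories=["user viewed chair_3 two turns ago"],
    ),
)
turn.agent_action = {"type": "FOCUS_OBJECT", "object_id": "chair_3"}
turn.agent_utterance = "Here is the red chair you looked at earlier."
```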
December 1, 2020 Breannan Smith, Chenglei Wu, He Wen, Patrick Peluse, Yaser Sheikh, Jessica Hodgins, Takaaki Shiratori

Constraining Dense Hand Surface Tracking with Elasticity

By extending recent advances in vision-based tracking and physically based animation, we present the first algorithm capable of tracking high-fidelity hand deformations through highly self-contacting and self-occluding hand gestures, for both single hands and two hands.
Paper
November 25, 2020 Donglai Xiang, Fabian Prada, Chenglei Wu, Jessica Hodgins

MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video

We present a method to capture temporally coherent dynamic clothing deformation from a monocular RGB video input. In contrast to the existing literature, our method does not require a pre-scanned personalized mesh template, and thus can be applied to in-the-wild videos.
Paper
November 23, 2020 Mateusz Machalica, Alex Samylkin, Meredith Porth, Satish Chandra

Predictive Test Selection

Change-based testing is a key component of continuous integration at Facebook. However, a large number of tests coupled with a high rate of changes committed to our monolithic repository make it infeasible to run all potentially impacted tests on each change. We propose a new predictive test selection strategy which selects a subset of tests to exercise for each change submitted to the continuous integration system.
Paper
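
As a hedged sketch of the selection step described above (not Facebook's production system), the snippet below scores each candidate test for a change with a learned model and keeps only the tests most likely to fail, subject to a budget. The feature names and the stand-in model are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestCandidate:
    name: str
    features: List[float]   # e.g. historical failure rate, file distance to the change

def select_tests(candidates: List[TestCandidate],
                 failure_prob: Callable[[List[float]], float],
                 budget: int) -> List[str]:
    """Return up to `budget` test names ranked by predicted failure probability."""
    ranked = sorted(candidates, key=lambda t: failure_prob(t.features), reverse=True)
    return [t.name for t in ranked[:budget]]

# Toy usage with a stand-in scoring function; a real system would use a trained classifier.
tests = [
    TestCandidate("test_feed_ranking", [0.30, 1.0]),
    TestCandidate("test_payments_api", [0.01, 5.0]),
    TestCandidate("test_graph_store",  [0.12, 2.0]),
]
toy_model = lambda f: f[0] / (1.0 + f[1])   # higher failure rate, closer files => higher score
print(select_tests(tests, toy_model, budget=2))
```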
November 16, 2020 Changwon Jang, Olivier Mercier, Kiseung Bang, Gang Li, Yang Zhao, Douglas Lanman

Design and Fabrication of Freeform Holographic Optical Elements

We propose an optimization method for grating vector fields that accounts for the unique selectivity properties of HOEs. We further show how our pipeline can be applied to two distinct HOE fabrication methods.
Paper
November 9, 2020 Aakar Gupta, Majed Samad, Kenrick Kin, Per Ola Kristensson, Hrvoje Benko

Investigating Remote Tactile Feedback for Mid-Air Text-Entry in Virtual Reality

In this paper, we investigate the utility of remote tactile feedback for freehand text-entry on a mid-air Qwerty keyboard in VR. To that end, we use insights from prior work to design a virtual keyboard along with different forms of tactile feedback, both spatial and non-spatial, for fingers and for wrists.
Paper