Explore the latest research from Facebook

All Publications

December 7, 2020 Terrance DeVries, Michal Drozdzal, Graham Taylor

Instance Selection for GANs

In this work we propose a novel approach to improve sample quality: altering the training dataset via instance selection before model training has taken place. By refining the empirical data distribution before training, we redirect model capacity towards high-density regions, which ultimately improves sample fidelity, lowers model capacity requirements, and significantly reduces training time.
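As a rough illustration of the idea, here is a minimal sketch of density-based instance selection, assuming images have already been embedded with a pretrained network; the diagonal-Gaussian density model and retention fraction below are illustrative choices, not necessarily the paper's exact recipe.

```python
import numpy as np

def select_instances(embeddings: np.ndarray, keep_frac: float = 0.5) -> np.ndarray:
    """Return indices of the highest-density samples under a Gaussian
    fit to the embedding distribution (diagonal covariance for simplicity)."""
    mu = embeddings.mean(axis=0)
    var = embeddings.var(axis=0) + 1e-6
    # Log-density up to a constant: higher means closer to the data mode.
    log_density = -0.5 * (((embeddings - mu) ** 2) / var).sum(axis=1)
    n_keep = int(len(embeddings) * keep_frac)
    return np.argsort(log_density)[-n_keep:]

# Usage: embed each training image with a pretrained network (hypothetical
# choice, e.g. an ImageNet classifier's penultimate layer), keep the densest
# fraction, and train the GAN only on that refined subset.
```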
Paper
December 7, 2020 Edward J. Smith, Roberto Calandra, Adriana Romero, Georgia Gkioxari, David Meger, Jitendra Malik, Michal Drozdzal

3D Shape Reconstruction from Vision and Touch

When a toddler is presented with a new toy, their instinctual behaviour is to pick it up and inspect it with their hands and eyes in tandem, clearly searching over its surface to properly understand what they are playing with. At any instant, touch provides high-fidelity, localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to multi-modal shape understanding which encourages a similar fusion of vision and touch information.
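A minimal sketch of multi-modal fusion in this spirit, in PyTorch; the encoder architectures, feature sizes, and single-chart output below are hypothetical placeholders rather than the paper's model.

```python
import torch
import torch.nn as nn

class VisionTouchFusion(nn.Module):
    """Toy fusion model: encode an RGB view and a tactile signal separately,
    concatenate the features, and predict vertices of one surface chart."""
    def __init__(self, touch_dim=64, chart_verts=25):
        super().__init__()
        self.chart_verts = chart_verts
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.touch_enc = nn.Sequential(nn.Linear(touch_dim, 32), nn.ReLU())
        self.decoder = nn.Linear(32 + 32, chart_verts * 3)

    def forward(self, image, touch):
        fused = torch.cat([self.vision_enc(image), self.touch_enc(touch)], dim=1)
        return self.decoder(fused).view(-1, self.chart_verts, 3)  # (B, verts, xyz)

model = VisionTouchFusion()
verts = model(torch.randn(2, 3, 64, 64), torch.randn(2, 64))
```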
Paper
December 4, 2020 Senthil Purushwalkam, Abhinav Gupta

Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases

Self-supervised representation learning approaches have recently surpassed their supervised learning counterparts on downstream tasks like object detection and image classification. Somewhat mysteriously, the recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class. In this work, we first present quantitative experiments to demystify these gains.
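For reference, instance classification of this kind is commonly trained with a contrastive objective over two augmented views of each image; a minimal sketch follows, where the batch size, embedding size, and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Instance-discrimination loss over a batch: each image's two augmented
    views form a positive pair; all other images act as negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, d)
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

# In practice z1 and z2 would be the encoder's embeddings of two
# augmentations of the same image batch.
loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))
```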
Paper
December 1, 2020 Natalia Neverova, David Novotny, Vasil Khalidov, Marc Szafraniec, Patrick Labatut, Andrea Vedaldi

Continuous Surface Embeddings

In this work, we focus on the task of learning and representing dense correspondences in deformable object categories. While this problem has been considered before, solutions so far have been rather ad-hoc for specific object types (i.e., humans), often with significant manual work involved.
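One simple way to represent dense correspondences with learned embeddings is to match per-pixel embeddings against per-vertex embeddings of a template surface; a minimal sketch, with illustrative shapes and a hypothetical temperature, not the paper's full formulation.

```python
import torch
import torch.nn.functional as F

def pixel_to_vertex(pixel_emb, vertex_emb, tau=0.1):
    """Soft correspondence from image pixels to mesh vertices: compare each
    per-pixel embedding against per-vertex embeddings and take a softmax
    over vertices (shapes are illustrative)."""
    # pixel_emb: (P, d) embeddings of P pixels; vertex_emb: (V, d)
    logits = F.normalize(pixel_emb, dim=1) @ F.normalize(vertex_emb, dim=1).t()
    probs = (logits / tau).softmax(dim=1)  # (P, V) assignment distribution
    return probs.argmax(dim=1)             # hard correspondence per pixel

matches = pixel_to_vertex(torch.randn(100, 16), torch.randn(500, 16))
```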
Paper
December 1, 2020 Breannan Smith, Chenglei Wu, He Wen, Patrick Peluse, Yaser Sheikh, Jessica Hodgins, Takaaki Shiratori

Constraining Dense Hand Surface Tracking with Elasticity

By extending recent advances in vision-based tracking and physically based animation, we present the first algorithm capable of tracking high-fidelity hand deformations through highly self-contacting and self-occluding hand gestures, for both single hands and two hands.
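A toy sketch of the general recipe of combining a vision-based data term with an elasticity constraint; the edge-length energy below is a crude stand-in for the paper's physically based model, and all shapes are illustrative.

```python
import torch

def tracking_energy(verts, observed, edges, rest_len, w_elastic=1.0):
    """Toy tracking energy: a data term pulls the tracked mesh toward
    observed surface points, while an elastic term penalizes deviation
    of edge lengths from the rest shape."""
    data = ((verts - observed) ** 2).sum()
    cur_len = (verts[edges[:, 0]] - verts[edges[:, 1]]).norm(dim=1)
    elastic = ((cur_len - rest_len) ** 2).sum()
    return data + w_elastic * elastic

# Usage: optimize vertex positions by gradient descent on the energy.
verts = torch.randn(10, 3, requires_grad=True)
observed = torch.randn(10, 3)
edges = torch.tensor([[i, i + 1] for i in range(9)])
rest_len = torch.ones(9)
opt = torch.optim.Adam([verts], lr=0.01)
for _ in range(100):
    loss = tracking_energy(verts, observed, edges, rest_len)
    opt.zero_grad(); loss.backward(); opt.step()
```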
Paper
November 25, 2020 Donglai Xiang, Fabian Prada, Chenglei Wu, Jessica Hodgins

MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video

We present a method to capture temporally coherent dynamic clothing deformation from a monocular RGB video input. In contrast to the existing literature, our method does not require a pre-scanned personalized mesh template, and thus can be applied to in-the-wild videos.
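As a small illustration, temporal coherence can be encouraged with a frame-to-frame smoothness penalty on the reconstructed surface; the term below is one illustrative regularizer, not the paper's full formulation.

```python
import torch

def temporal_coherence(verts_seq):
    """Penalize frame-to-frame vertex jumps in a (T, V, 3) mesh sequence,
    one simple way to encourage temporally coherent clothing deformation."""
    return ((verts_seq[1:] - verts_seq[:-1]) ** 2).sum(dim=(1, 2)).mean()

loss = temporal_coherence(torch.randn(30, 500, 3))
```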
Paper
November 9, 2020 Tushar Nagarajan, Kristen Grauman

Learning Affordance Landscapes for Interaction Exploration in 3D Environments

Embodied agents operating in human spaces must be able to master how their environment works: what objects can the agent use, and how can it use them? We introduce a reinforcement learning approach for exploration for interaction, whereby an embodied agent autonomously discovers the affordance landscape of a new unmapped 3D environment (such as an unfamiliar kitchen).
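A toy sketch of exploration for interaction, using a hypothetical simulator and a random policy in place of a learned one; the key idea shown is that reward is granted only for newly discovered object-interaction pairs, which drives the agent toward mapping the scene's affordances.

```python
import random

class ToyKitchen:
    """Hypothetical stand-in for an embodied 3D simulator."""
    OBJECTS = {"fridge": "open", "kettle": "toggle", "cup": "take"}
    def reset(self):
        self.t = 0
        return "start"
    def step(self, action):
        self.t += 1
        obj, interaction = action
        success = self.OBJECTS.get(obj) == interaction  # right verb for the object
        return "obs", success, self.t >= 20, {"object": obj, "interaction": interaction}

def explore(env, episodes=50):
    """Reward only the first successful execution of an interaction on an
    object, so exploration is driven toward discovering new affordances."""
    affordances, verbs = set(), ["open", "toggle", "take", "push"]
    for _ in range(episodes):
        env.reset(); done = False
        while not done:
            action = (random.choice(list(env.OBJECTS)), random.choice(verbs))
            _, success, done, info = env.step(action)
            if success and (info["object"], info["interaction"]) not in affordances:
                affordances.add((info["object"], info["interaction"]))  # reward = 1 here
    return affordances

print(explore(ToyKitchen()))
```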
Paper
November 9, 2020 Chao Yang, Ser Nam Lim

One-Shot Domain Adaptation For Face Generation

In this paper, we propose a framework capable of generating face images that fall into the same distribution as that of a given one-shot example. We leverage a pre-trained StyleGAN model that has already learned the generic face distribution.
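A minimal sketch of one-shot generator adaptation, with toy stand-ins for the pretrained StyleGAN generator and a perceptual feature extractor; the feature-matching objective is illustrative, not the paper's exact loss.

```python
import torch
import torch.nn as nn

# Stand-ins: `G` for a pretrained StyleGAN generator, `perceptual` for a
# pretrained feature extractor; both are hypothetical placeholders here.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3 * 16 * 16), nn.Tanh())
perceptual = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 128))

def adapt_one_shot(G, target, steps=100, lr=1e-4):
    """Fine-tune the generator so its samples move toward the feature
    statistics of a single target example (a crude one-shot objective)."""
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    with torch.no_grad():
        target_feat = perceptual(target.view(1, -1))
    for _ in range(steps):
        fake = G(torch.randn(8, 64)).view(8, -1)
        loss = (perceptual(fake).mean(0) - target_feat.squeeze(0)).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

adapt_one_shot(G, torch.randn(3, 16, 16))
```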
Paper
September 7, 2020 Sungyong Baik, Hyo Jin Kim, Tianwei Shen, Eddy Ilg, Kyoung Mu Lee, Chris Sweeney

Domain Adaptation of Learned Features for Visual Localization

We tackle the problem of visual localization under changing conditions, such as time of day, weather, and seasons. Recent learned local features based on deep neural networks have shown superior performance over classical hand-crafted local features. However, in a real-world scenario, there often exists a large domain gap between training and target images, which can significantly degrade the localization accuracy.
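One simple way to narrow such a domain gap is to fine-tune the feature network for invariance between an image and a target-condition rendition of it; a minimal sketch with stand-in modules (the feature network and the restyling step that produces the target-condition image are assumptions, not the paper's pipeline).

```python
import torch
import torch.nn as nn

def invariance_loss(feat_net, img, img_restyled):
    """Encourage dense features to be invariant to the appearance gap by
    matching descriptors of an image and a target-condition rendition of it
    (e.g. a day image restyled to look like night)."""
    return (feat_net(img) - feat_net(img_restyled)).pow(2).mean()

feat_net = nn.Conv2d(3, 32, 3, padding=1)  # stand-in for a learned local feature CNN
img = torch.randn(1, 3, 64, 64)
img_night = img * 0.3                      # crude stand-in for a style-transferred image
loss = invariance_loss(feat_net, img, img_night)
loss.backward()
```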
Paper
September 7, 2020 Gunjan Aggarwal, Devi Parikh

Neuro-Symbolic Generative Art: A Preliminary Study

As a preliminary study, we train a generative deep neural network on samples from the symbolic approach. We demonstrate through human studies that subjects find the final artifacts and the creation process using our neuro-symbolic approach to be more creative than the symbolic approach 61% and 82% of the time, respectively.
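A toy sketch of the neuro-symbolic setup: draw training samples from a hand-written symbolic rule and fit a small GAN to them. The symbolic sampler and both network architectures below are hypothetical stand-ins, not the paper's system.

```python
import torch
import torch.nn as nn

# Hypothetical symbolic sampler: renders a procedural "artwork" from simple
# hand-written rules, standing in for the paper's symbolic generation system.
def symbolic_sample(batch=16, size=32):
    x = torch.linspace(-1, 1, size)
    freq = torch.rand(batch, 1, 1) * 8 + 1  # random rule parameter
    return (torch.sin(freq * x.view(1, size, 1)) *
            torch.cos(freq * x.view(1, 1, size))).view(batch, -1)

G = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 1024), nn.Tanh())
D = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):  # toy adversarial training loop
    real, fake = symbolic_sample(), G(torch.randn(16, 16))
    d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake.detach()), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(D(fake), torch.ones(16, 1))  # generator tries to fool D
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```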
Paper