Explore the latest research from Facebook

All Publications

December 14, 2021 Donglai Xiang, Fabián Prada, Timur Bagautdinov, Weipeng Xu, Yuan Dong, He Wen, Jessica Hodgins, Chenglei Wu
Paper

Modeling Clothing as a Separate Layer for an Animatable Human Avatar

We train a new two-layer codec avatar that models the upper clothing and the inner body as separate layers. To learn the interaction between body dynamics and clothing states, we use a temporal convolution network to predict the clothing latent code from a sequence of input skeletal poses. We show photorealistic animation output for three different actors and demonstrate the advantage of our clothed-body avatars over the single-layer avatars used in previous work.
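A minimal NumPy sketch of the prediction step described above: a causal temporal convolution maps a window of past skeletal poses to a latent code per frame. All sizes, weights, and the single-layer architecture are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- illustration only, not the paper's configuration.
T, P, Z, K = 16, 63, 8, 5   # frames, pose dims, latent dims, temporal kernel

poses = rng.standard_normal((T, P))        # input skeletal pose sequence
W = rng.standard_normal((K, P, Z)) * 0.01  # temporal-conv weights
b = np.zeros(Z)

def temporal_conv(x, W, b):
    """Causal 1D convolution over time: each output frame sees only the
    current and previous K-1 pose frames, in the spirit of a temporal
    convolution network."""
    T, P = x.shape
    K, _, Z = W.shape
    pad = np.concatenate([np.zeros((K - 1, P)), x], axis=0)  # causal padding
    out = np.empty((T, Z))
    for t in range(T):
        window = pad[t:t + K]                  # (K, P) pose window
        out[t] = np.einsum('kp,kpz->z', window, W) + b
    return out

codes = temporal_conv(poses, W, b)   # predicted clothing latent codes
print(codes.shape)                   # (16, 8): one latent code per frame
```

In the actual system the decoded latent code would drive the clothing layer of the avatar; here the convolution is a single linear layer purely to show the pose-window-to-code mapping.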
December 13, 2021 Yangyang Xia, Buye Xu, Anurag Kumar
Paper

Incorporating Real-world Noisy Speech in Neural-network-based Speech Enhancement Systems

In this paper, we explore methods that enable supervised speech enhancement systems to train on real-world degraded speech data. Specifically, we propose a semi-supervised approach for speech enhancement in which we first train a modified vector-quantized variational autoencoder that solves a source separation task.
November 9, 2021 Verna Dankers, Anna Langedijk, Kate McCurdy, Adina Williams, Dieuwke Hupkes
Paper

Generalising to German Plural Noun Classes, from the Perspective of a Recurrent Neural Network

Here, in line with that tradition, we explore how recurrent neural networks acquire the complex German plural system and reflect upon how their strategy compares to human generalisation and rule-based models of this system.
November 6, 2021 Rami Aly, Zhijiang Guo, Michael Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, Arpit Mittal
Paper

FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information

In this paper we introduce a novel dataset and benchmark, Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS), which consists of 87,026 verified claims.
October 31, 2021 Pedro Rodriguez, Jordan Boyd-Graber
Paper

Evaluation Paradigms in Question Answering

This position paper names and distinguishes two evaluation paradigms in question answering. Despite substantial overlap, subtle but significant distinctions exert an outsize influence on research: one paradigm values creating more intelligent QA systems, while the other values building QA systems that appeal to users.
October 11, 2021 Bo Xiong, Haoqi Fan, Kristen Grauman, Christoph Feichtenhofer
Paper

Multiview Pseudo-Labeling for Semi-supervised Learning from Video

Though our method capitalizes on multiple views, it nonetheless trains a model that is shared across appearance and motion input and thus, by design, incurs no additional computation overhead at inference time.
October 11, 2021 Omid Poursaeed, Tianxing Jiang, Harry Yang, Serge Belongie, Ser-Nam Lim
Paper

Robustness and Generalization via Generative Adversarial Training

In this paper we present Generative Adversarial Training, an approach to simultaneously improve the model’s generalization to the test set and out-of-domain samples as well as its robustness to unseen adversarial attacks.
October 11, 2021 Yash Kant, Abhinav Moudgil, Dhruv Batra, Devi Parikh, Harsh Agrawal
Paper

Contrast and Classify: Training Robust VQA Models

We propose a novel training paradigm (ConClaT) that optimizes both cross-entropy and contrastive losses. The contrastive loss encourages representations to be robust to linguistic variations in questions while the cross-entropy loss preserves the discriminative power of representations for answer prediction.
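The two-loss setup can be sketched in a few lines of NumPy: a softmax cross-entropy term for answer classification plus an InfoNCE-style contrastive term that pulls a question's representation toward that of its paraphrase. The mixing weight, batch sizes, and the specific contrastive formulation are assumptions for illustration, not the paper's exact ConClaT recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_entropy(logits, labels):
    """Standard softmax cross-entropy for answer classification."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def contrastive_loss(emb, pos, tau=0.1):
    """InfoNCE-style loss: pull each embedding toward its positive view
    (e.g., a linguistic variation of the same question), push it away
    from the other examples in the batch."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    pos = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    sim = emb @ pos.T / tau                 # (N, N) cosine-similarity matrix
    return cross_entropy(sim, np.arange(len(emb)))

# Toy batch: 4 questions, 10 answer classes, 16-dim representations.
logits = rng.standard_normal((4, 10))
labels = np.array([1, 3, 0, 7])
emb = rng.standard_normal((4, 16))
aug = emb + 0.05 * rng.standard_normal((4, 16))  # paraphrased-question views

alpha = 0.5  # hypothetical mixing weight between the two losses
total = cross_entropy(logits, labels) + alpha * contrastive_loss(emb, aug)
print(float(total))
```

Minimizing the combined objective keeps the classifier discriminative while making the question representations invariant to paraphrase-level perturbations, which is the intuition the abstract describes.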
October 10, 2021 Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze
Paper

LeViT: a Vision Transformer in ConvNet’s Clothing for Faster Inference

We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware.
October 10, 2021 Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
Paper

Emerging Properties in Self-Supervised Vision Transformers

In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) [16] that stand out compared to convolutional networks (convnets).