Explore the latest research from Facebook

All Publications

January 5, 2021 Nirbhay Modhe, Prithvijit Chattopadhyay, Mohit Sharma, Abhishek Das, Devi Parikh, Dhruv Batra, Ramakrishna Vedantam

IR-VIC: Unsupervised Discovery of Sub-goals for Transfer in RL

We propose a novel framework to identify subgoals useful for exploration in sequential decision making tasks under partial observability.
Paper
January 1, 2021 Mahmoud Assran, Michael Rabbat

Asynchronous Gradient-Push

We consider a multi-agent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents.
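For orientation, the collective objective described above can be written as a consensus problem (notation ours, not taken from the paper):

    \min_{x \in \mathbb{R}^d} \; F(x) = \sum_{i=1}^{n} f_i(x)

where f_i is the smooth, strongly convex function known only to agent i, and all n agents must agree on the minimizer while exchanging messages asynchronously.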
Paper
December 8, 2020 Zhenpeng Zhou, Ahmad Beirami, Paul A. Crook, Pararth Shah, Rajen Subba, Alborz Geramifard

Resource Constrained Dialog Policy Learning via Differentiable Inductive Logic Programming

Motivated by the needs of resource-constrained dialog policy learning, we introduce dialog policy via differentiable inductive logic (DILOG). We explore the tasks of one-shot learning and zero-shot domain transfer with DILOG on SimDial and MultiWoZ.
Paper
December 8, 2020 Ankit Arun, Soumya Batra, Vikas Bhardwaj, Ashwini Challa, Pinar Donmez, Peyman Heidari, Hakan Inan, Shashank Jain, Anuj Kumar, Shawn Mei, Karthik Mohan, Michael White

Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data

In this paper, we present approaches that have helped us deploy data-efficient neural solutions for NLG in conversational systems to production. We describe a family of sampling and modeling techniques that attain production quality with lightweight neural network models using only a fraction of the data that would otherwise be necessary, and we provide a thorough comparison of these techniques.
Paper
December 7, 2020 Edward J. Smith, Roberto Calandra, Adriana Romero, Georgia Gkioxari, David Meger, Jitendra Malik, Michal Drozdzal

3D Shape Reconstruction from Vision and Touch

When a toddler is presented with a new toy, their instinctual behavior is to pick it up and inspect it with their hands and eyes in tandem, clearly searching over its surface to properly understand what they are playing with. At any instant here, touch provides high-fidelity localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to multi-modal shape understanding which encourages a similar fusion of vision and touch information.
Paper
December 7, 2020 Maximilian Nickel, Brian Karrer, Daniel Jiang, Sam Daulton, Ben Letham, Andrew Gordon Wilson, Eytan Bakshy

BOTORCH: A Framework for Efficient Monte-Carlo Bayesian Optimization

We introduce BOTORCH, a modern programming framework for Bayesian optimization that combines Monte-Carlo (MC) acquisition functions, a novel sample average approximation optimization approach, auto-differentiation, and variance reduction techniques.
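To give a flavor of the framework, here is a minimal Bayesian optimization step written against BOTORCH's public API; the toy objective, data, and hyperparameter values are illustrative assumptions, not taken from the paper:

import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import qExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy data: 10 random points in [0, 1]^2 with a made-up objective.
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = -(train_X - 0.5).pow(2).sum(dim=-1, keepdim=True)

# Fit a GP surrogate to the observations.
gp = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_model(mll)

# Monte-Carlo acquisition function (q-Expected Improvement), optimized
# with auto-differentiation as the abstract describes.
acqf = qExpectedImprovement(model=gp, best_f=train_Y.max())
candidates, _ = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=2,             # propose a batch of two points
    num_restarts=5,
    raw_samples=64,
)

The MC acquisition value is estimated by sampling from the GP posterior, which is what lets the framework differentiate through it and apply its sample average approximation approach.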
Paper
December 7, 2020 Shali Jiang, Daniel Jiang, Max Balandat, Brian Karrer, Jacob R. Gardner, Roman Garnett

Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees

In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree.
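For orientation, the k-step lookahead value that the scenario tree represents satisfies a nested, Bellman-style recursion (notation ours, not the paper's):

    \alpha_1(x \mid \mathcal{D}) = \mathbb{E}_{y}\big[\, u(x, y \mid \mathcal{D}) \,\big], \qquad
    \alpha_k(x \mid \mathcal{D}) = \mathbb{E}_{y}\Big[\, u(x, y \mid \mathcal{D}) + \max_{x'} \alpha_{k-1}\big(x' \mid \mathcal{D} \cup \{(x, y)\}\big) \Big]

where u is a one-step utility (e.g., expected improvement), \mathcal{D} is the current dataset, and the expectation is over the posterior p(y \mid x, \mathcal{D}). Solving the nested maximizations sequentially is what makes naive multi-step lookahead intractable; a one-shot formulation instead optimizes all decision variables in the tree jointly.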
Paper
December 6, 2020 Emile Mathieu, Maximilian Nickel

Riemannian Continuous Normalizing Flows

We introduce Riemannian continuous normalizing flows, a model which admits the parametrization of flexible probability measures on smooth manifolds by defining flows as the solution to ordinary differential equations.
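As background, a Euclidean continuous normalizing flow transports a base density along an ODE and tracks the log-density with the instantaneous change-of-variables formula; the equations below are this standard Euclidean form (the paper's contribution is the generalization to Riemannian manifolds):

    \frac{\mathrm{d} z(t)}{\mathrm{d} t} = f_\theta(z(t), t), \qquad
    \frac{\mathrm{d} \log p(z(t))}{\mathrm{d} t} = -\operatorname{div} f_\theta(z(t), t)

On a smooth manifold, the vector field f_\theta and the divergence must respect the manifold's geometry rather than the flat Euclidean structure.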
Paper
December 6, 2020 Yann Dubois, Douwe Kiela, David J. Schwab, Ramakrishna Vedantam

Learning Optimal Representations with the Decodable Information Bottleneck

We propose the Decodable Information Bottleneck (DIB) that considers information retention and compression from the perspective of the desired predictive family. As a result, DIB gives rise to representations that are optimal in terms of expected test performance and can be estimated with guarantees.
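For reference, the classical information bottleneck trades compression of the input against retention of label information via mutual-information terms; the objective below is that standard formulation, shown only as background for what DIB reinterprets relative to the predictive family:

    \min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)

where Z is the representation, \beta > 0 balances the two terms, and I(\cdot\,;\cdot) denotes mutual information.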
Paper
December 6, 2020 Xian Li, Asa Cooper Stickland, Yuqing Tang, Xiang Kong

Deep Transformers with Latent Depth

The Transformer model has achieved state-of-the-art performance in many sequence modeling tasks. However, how to leverage model capacity with large or variable depths is still an open challenge. We present a probabilistic framework to automatically learn which layer(s) to use by learning the posterior distributions of layer selection.
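One common way to make such discrete layer selection trainable is to relax each binary "use this layer?" decision into a continuous gate with learned parameters. The PyTorch sketch below illustrates this general idea with a Gumbel-sigmoid relaxation; it is an assumption-laden illustration, and the paper's exact posterior parametrization may differ:

import torch
import torch.nn as nn

class GatedLayer(nn.Module):
    """Wraps a layer with a stochastic, differentiable on/off gate."""

    def __init__(self, layer: nn.Module):
        super().__init__()
        self.layer = layer
        # Variational logit for the probability that this layer is used.
        self.logit = nn.Parameter(torch.zeros(()))

    def forward(self, x: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
        # Gumbel-sigmoid (binary-concrete) relaxation of a Bernoulli gate.
        u = torch.rand(()).clamp(1e-6, 1.0 - 1e-6)
        logistic_noise = torch.log(u) - torch.log1p(-u)
        gate = torch.sigmoid((self.logit + logistic_noise) / temperature)
        # Interpolate between applying the layer and skipping it entirely.
        return gate * self.layer(x) + (1.0 - gate) * x

Stacking such gated layers and regularizing the gate logits toward a prior yields, per layer, a posterior probability of inclusion that can be learned jointly with the model weights.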
Paper