Explore the latest research from Facebook
All Publications
IR-VIC: Unsupervised Discovery of Sub-goals for Transfer in RL
We propose a novel framework to identify subgoals useful for exploration in sequential decision making tasks under partial observability.
Paper
Asynchronous Gradient-Push
We consider a multi-agent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents.
Paper
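The abstract above can be illustrated with a minimal, hypothetical sketch of a *synchronous* push-sum ("gradient-push") iteration over a directed graph; the paper's contribution is an asynchronous variant, which this toy version does not capture, and the quadratic local objectives are invented for the example.

```python
def local_grad(theta, target):
    # local objective f_i(theta) = 0.5 * (theta - target)**2  (smooth, strongly convex)
    return theta - target

def gradient_push_step(x, y, targets, out_neighbors, alpha):
    """One synchronous push-sum iteration followed by a local gradient step."""
    n = len(x)
    new_x, new_y = [0.0] * n, [0.0] * n
    for i in range(n):
        dests = out_neighbors[i] + [i]       # push equal shares to out-neighbors and self
        for j in dests:
            new_x[j] += x[i] / len(dests)
            new_y[j] += y[i] / len(dests)
    for i in range(n):
        z = new_x[i] / new_y[i]              # de-biased local estimate
        new_x[i] -= alpha * local_grad(z, targets[i])
    return new_x, new_y

targets = [1.0, 2.0, 3.0]                    # minimizer of the sum is the mean, 2.0
out_neighbors = [[1], [2], [0]]              # directed ring 0 -> 1 -> 2 -> 0
x, y = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
for _ in range(2000):
    x, y = gradient_push_step(x, y, targets, out_neighbors, alpha=0.05)
estimates = [xi / yi for xi, yi in zip(x, y)]  # every agent's estimate nears 2.0
```

The push-sum weights are column-stochastic, which is what makes the scheme work on directed graphs where doubly stochastic mixing is unavailable.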
Stability of Decentralized Gradient Descent in Open Multi-Agent Systems
The aim of decentralized gradient descent (DGD) is to minimize a sum of n functions held by interconnected agents. We study the stability of DGD in open contexts where agents can join or leave the system, resulting each time in the addition or the removal of their function from the global objective.
Paper
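A toy version of the setting above, assuming a complete graph with uniform mixing weights and invented quadratic local objectives (neither is from the paper), shows the DGD update and an agent leaving the open system:

```python
def dgd_step(xs, targets, alpha):
    # x_i <- sum_j w_ij x_j - alpha * grad f_i(x_i),
    # with w_ij = 1/n on a complete graph and f_i(x) = 0.5 * (x - t_i)**2
    avg = sum(xs) / len(xs)
    return [avg - alpha * (x - t) for x, t in zip(xs, targets)]

targets = [0.0, 4.0, 8.0]       # minimizer of the current sum: the mean, 4.0
xs = [0.0, 0.0, 0.0]
for _ in range(500):
    xs = dgd_step(xs, targets, alpha=0.1)

# agent 2 leaves the open system: its function is removed from the global objective
targets, xs = targets[:2], xs[:2]
for _ in range(500):
    xs = dgd_step(xs, targets, alpha=0.1)   # agents track the new minimizer, 2.0
```

With a constant step size the agents settle near, not exactly at, the minimizer; the departure shifts the global objective and the iterates re-converge toward the new optimum.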
Resource Constrained Dialog Policy Learning via Differentiable Inductive Logic Programming
Motivated by the needs of resource-constrained dialog policy learning, we introduce dialog policy learning via differentiable inductive logic (DILOG). We explore the tasks of one-shot learning and zero-shot domain transfer with DILOG on SimDial and MultiWoZ.
Paper
Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data
In this paper, we present approaches that have helped us deploy data-efficient neural solutions for NLG in conversational systems to production. We describe a family of sampling and modeling techniques that attain production quality with lightweight neural network models using only a fraction of the data that would otherwise be necessary, and provide a thorough comparison of them.
Paper
3D Shape Reconstruction from Vision and Touch
When a toddler is presented with a new toy, their instinctual behaviour is to pick it up and inspect it with hand and eyes in tandem, searching over its surface to properly understand what they are playing with. At any instant, touch provides high-fidelity localized information while vision provides complementary global context. In 3D shape reconstruction, however, this complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to multi-modal shape understanding that encourages a similar fusion of vision and touch information.
Paper
BOTORCH: A Framework for Efficient Monte-Carlo Bayesian Optimization
We introduce BOTORCH, a modern programming framework for Bayesian optimization that combines Monte-Carlo (MC) acquisition functions, a novel sample average approximation optimization approach, auto-differentiation, and variance reduction techniques.
Paper
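The core idea of an MC acquisition function can be sketched in pure Python (this is illustrative only, not BOTORCH's API): expected improvement at a candidate point is estimated by averaging the improvement over samples from the model's posterior, here assumed Gaussian, and checked against the closed form.

```python
import math
import random

def mc_expected_improvement(mu, sigma, best_f, n_samples=20000, seed=0):
    """Estimate EI = E[max(f(x) - best_f, 0)] under a Gaussian posterior N(mu, sigma^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += max(rng.gauss(mu, sigma) - best_f, 0.0)
    return total / n_samples

def analytic_ei(mu, sigma, best_f):
    # closed form for comparison: sigma * (z * Phi(z) + phi(z)), z = (mu - best_f) / sigma
    z = (mu - best_f) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * Phi + phi)

mc = mc_expected_improvement(0.5, 1.0, 0.0)
exact = analytic_ei(0.5, 1.0, 0.0)   # the MC estimate agrees closely with the closed form
```

The appeal of the MC formulation is that it extends to batches and composite objectives where no closed form exists, and the sample-level computation is differentiable via the reparametrization trick.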
Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees
In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree.
Paper
Riemannian Continuous Normalizing Flows
We introduce Riemannian continuous normalizing flows, a model which admits the parametrization of flexible probability measures on smooth manifolds by defining flows as solutions to ordinary differential equations.
Paper
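A one-dimensional *Euclidean* toy of a continuous normalizing flow (the paper's contribution is the Riemannian generalization to manifolds, not shown here; the vector field is invented for the example): the sample and its log-density change are integrated jointly with Euler steps via the instantaneous change-of-variables formula.

```python
import math

def f(z):
    return math.tanh(z)               # vector field defining the flow dz/dt = f(z)

def div_f(z):
    return 1.0 - math.tanh(z) ** 2    # divergence of f (a scalar derivative in 1-D)

def integrate(z0, steps=2000, T=1.0):
    z, delta_logp = z0, 0.0
    h = T / steps
    for _ in range(steps):
        delta_logp -= h * div_f(z)    # d(log p)/dt = -div f  (change of variables)
        z += h * f(z)                 # Euler step for the state
    return z, delta_logp

z_T, delta_logp = integrate(1.0)
# analytic solution of dz/dt = tanh(z) for comparison: sinh(z_T) = e^T * sinh(z_0)
z_exact = math.asinh(math.e * math.sinh(1.0))
```

Since the flow expands volume for z > 0, the accumulated log-density change is negative here; on a manifold the divergence and the ODE solver would be replaced by their Riemannian counterparts.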
Learning Optimal Representations with the Decodable Information Bottleneck
We propose the Decodable Information Bottleneck (DIB) that considers information retention and compression from the perspective of the desired predictive family. As a result, DIB gives rise to representations that are optimal in terms of expected test performance and can be estimated with guarantees.
Paper