Explore the latest research from Facebook

All Publications

June 6, 2021 Panagiotis Tzirakis, Anurag Kumar, Jacob Donley

Multi-Channel Speech Enhancement Using Graph Neural Networks

In this paper, we introduce a different research direction by viewing each audio channel as a node lying in a non-Euclidean space and, specifically, a graph.
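The core idea can be sketched as a graph convolution over the microphone channels: each channel's features become a node, and a normalized adjacency mixes information across channels. This is a minimal illustrative sketch, not the paper's architecture; the fully connected channel graph, feature sizes, and weights below are assumptions.

```python
import numpy as np

def graph_conv(X, A, W):
    """One graph-convolution layer: aggregate neighbor features with a
    symmetrically normalized adjacency, then project and apply ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(A_norm @ X @ W, 0.0)

# 4 microphone channels, each summarized by an 8-dim feature vector
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
A = np.ones((4, 4)) - np.eye(4)   # fully connected channel graph (assumed)
W = rng.standard_normal((8, 16))
H = graph_conv(X, A, W)
print(H.shape)  # (4, 16): one mixed embedding per channel
```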
Paper
March 31, 2021 Christian Kroer, Nicolas E. Stier-Moses

Market Equilibrium Models in Large-Scale Internet Markets

We focus on Internet advertising auctions, fair division problems, content recommendation systems, and robust abstractions of large-scale markets.
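For flavor, one classic way to compute a market equilibrium in a linear Fisher market is proportional-response dynamics (this is a standard textbook algorithm, not necessarily the one used in the paper; the two-buyer, two-good market below is made up for illustration):

```python
import numpy as np

def proportional_response(V, B, iters=500):
    """Buyers with budgets B and linear valuations V repeatedly split their
    budget across goods in proportion to the utility each good delivered
    last round; for linear utilities this converges to equilibrium."""
    n_buyers, n_goods = V.shape
    bids = np.outer(B, np.ones(n_goods) / n_goods)   # start: uniform bids
    for _ in range(iters):
        prices = bids.sum(axis=0)                    # price = total bid on good
        alloc = bids / prices                        # split goods pro rata to bids
        util = V * alloc                             # utility drawn from each good
        bids = B[:, None] * util / util.sum(axis=1, keepdims=True)
    return bids.sum(axis=0), alloc

V = np.array([[2.0, 1.0], [1.0, 2.0]])  # each buyer prefers a different good
B = np.array([1.0, 1.0])
prices, alloc = proportional_response(V, B)
print(np.round(prices, 3), np.round(alloc, 3))
```

In this symmetric market the dynamics converge to unit prices, with each buyer receiving essentially all of their preferred good.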
Paper
March 5, 2021 John Ahlgren, Maria Eugenia Berezin, Kinga Bojarczuk, Elena Dulskyte, Inna Dvortsova, Johann George, Natalija Gucevska, Mark Harman, Maria Lomeli, Erik Meijer, Silvia Sapora, Justin Spahr-Summers

Testing Web Enabled Simulation at Scale Using Metamorphic Testing

We report on Facebook’s deployment of MIA (Metamorphic Interaction Automaton). MIA is used to test Facebook’s Web Enabled Simulation, built on a web infrastructure of hundreds of millions of lines of code.
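The key trick in metamorphic testing is that it needs no ground-truth oracle: it checks that a known transformation of the input changes the output in a predictable way. A toy sketch of one such relation (the `search` function is a hypothetical stand-in, not MIA or Facebook's infrastructure):

```python
def search(items, query, max_results=None):
    """Hypothetical system under test: a simple substring search."""
    hits = [x for x in items if query in x]
    return hits if max_results is None else hits[:max_results]

def check_subset_relation(items, query):
    """Metamorphic relation: tightening a constraint (adding max_results)
    must never surface a result the unconstrained query did not return."""
    full = search(items, query)
    limited = search(items, query, max_results=2)
    return set(limited) <= set(full) and len(limited) <= 2

posts = ["cat video", "dog photo", "cat meme", "news", "cat facts"]
print(check_subset_relation(posts, "cat"))  # True
```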
Paper
February 11, 2021 Carlos A. Gómez-Uribe, Brian Karrer

The Decoupled Extended Kalman Filter for Dynamic Exponential-Family Factorization Models

Motivated by the needs of online large-scale recommender systems, we specialize the decoupled extended Kalman filter (DEKF) to factorization models, including factorization machines and matrix and tensor factorization, and we illustrate the effectiveness of the approach through numerical experiments on synthetic and real-world data.
Paper
January 15, 2021 Sarah Bechtle, Artem Molchanov, Yevgen Chebotar, Edward Grefenstette, Ludovic Righetti, Gaurav S. Sukhatme, Franziska Meier

Meta Learning via Learned Loss

In this paper, we take the first step towards automating this process, with the view of producing models which train faster and more robustly. Concretely, we present a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures.
Paper
December 16, 2020 Matteo Castiglioni, Andrea Celli, Alberto Marchesi, Nicola Gatti

Online Bayesian Persuasion

In Bayesian persuasion, an informed sender has to design a signaling scheme that discloses the right amount of information so as to influence the behavior of a self-interested receiver. This kind of strategic interaction is ubiquitous in real-world economic scenarios. However, the seminal model by Kamenica and Gentzkow makes some stringent assumptions that limit its applicability in practice.
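To fix intuition for the baseline (offline) model, here is the classic prosecutor example from Kamenica and Gentzkow, worked out exactly: the prior probability of guilt is 0.3, the judge convicts only if her posterior reaches 1/2, and the prosecutor commits to a signaling scheme before observing the state.

```python
from fractions import Fraction as F

prior_guilty = F(3, 10)   # prior probability the defendant is guilty
threshold = F(1, 2)       # judge convicts iff posterior >= 1/2

# Optimal scheme: always send "convict" when guilty; when innocent, send
# "convict" with probability x chosen so the posterior lands exactly on
# the judge's threshold:  prior / (prior + (1 - prior) * x) = 1/2.
x = prior_guilty * (1 - threshold) / (threshold * (1 - prior_guilty))
posterior = prior_guilty / (prior_guilty + (1 - prior_guilty) * x)
p_convict = prior_guilty + (1 - prior_guilty) * x

print(x, posterior, p_convict)  # 3/7 1/2 3/5
```

By garbling the signal just enough, the prosecutor secures conviction with probability 3/5, twice the prior probability of guilt.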
Paper
December 15, 2020 Stéphane d'Ascoli, Levent Sagun, Giulio Biroli

Triple descent and the two kinds of overfitting: Where & why do they appear?

A recent line of research has highlighted the existence of a “double descent” phenomenon in deep learning, whereby increasing the number of training examples N causes the generalization error of neural networks to peak when N is of the same order as the number of parameters P. In earlier works, a similar phenomenon was shown to exist in simpler models such as linear regression, where the peak instead occurs when N is equal to the input dimension D. Since both peaks coincide with the interpolation threshold, they are often conflated in the literature. In this paper, we show that despite their apparent similarity, these two scenarios are inherently different.
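The linear-regression peak at N = D is easy to reproduce numerically: fitting the minimum-norm least-squares solution to noisy data, test error blows up at the interpolation threshold. A small simulation sketch (dimensions, noise level, and trial counts are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 20                                # input dimension
w_true = rng.standard_normal(D)
Ns = [5, 10, 15, 20, 25, 40, 100]     # training-set sizes around N = D
trials = 200
test_err = []

for N in Ns:
    errs = []
    for _ in range(trials):
        X = rng.standard_normal((N, D))
        y = X @ w_true + 0.5 * rng.standard_normal(N)   # noisy labels
        # minimum-norm least-squares fit (the interpolating estimator)
        w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        Xt = rng.standard_normal((1000, D))
        errs.append(np.mean((Xt @ (w_hat - w_true)) ** 2))
    test_err.append(np.mean(errs))

peak_N = Ns[int(np.argmax(test_err))]
print(peak_N)  # error peaks at the interpolation threshold N = D = 20
```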
Paper
December 8, 2020 Ankit Arun, Soumya Batra, Vikas Bhardwaj, Ashwini Challa, Pinar Donmez, Peyman Heidari, Hakan Inan, Shashank Jain, Anuj Kumar, Shawn Mei, Karthik Mohan, Michael White

Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data

In this paper, we present approaches that have helped us deploy data-efficient neural solutions for NLG in conversational systems to production. We describe a family of sampling and modeling techniques that attain production quality with lightweight neural network models using only a fraction of the data that would otherwise be necessary, and we present a thorough comparison of these techniques.
Paper
December 7, 2020 Terrance DeVries, Michal Drozdzal, Graham Taylor

Instance Selection for GANs

In this work we propose a novel approach to improve sample quality: altering the training dataset via instance selection before model training has taken place. By refining the empirical data distribution before training, we redirect model capacity towards high-density regions, which ultimately improves sample fidelity, lowers model capacity requirements, and significantly reduces training time.
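A minimal sketch of density-based instance selection (the density score here is a simple k-nearest-neighbor distance in raw feature space; the paper would operate on embeddings, and the scoring function and keep fraction below are illustrative assumptions):

```python
import numpy as np

def select_instances(X, keep_frac=0.5, k=5):
    """Keep the keep_frac of points with the highest estimated density,
    scored by (negated) distance to the k-th nearest neighbor."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    knn_dist = np.sort(d2, axis=1)[:, k]    # k-th neighbor (column 0 is self)
    n_keep = int(len(X) * keep_frac)
    keep = np.argsort(knn_dist)[:n_keep]    # smallest k-NN distance = densest
    return X[keep], keep

rng = np.random.default_rng(0)
dense = rng.normal(0.0, 0.5, size=(90, 2))    # high-density cluster
sparse = rng.normal(0.0, 8.0, size=(10, 2))   # low-density outliers
X = np.vstack([dense, sparse])
_, kept = select_instances(X, keep_frac=0.8)
print(np.sum(kept >= 90))  # how many low-density outliers survive the filter
```

The filter keeps 80 of the 100 points, and the discarded points are dominated by the low-density outliers, concentrating the retained data in the high-density region.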
Paper
December 7, 2020 Shali Jiang, Daniel Jiang, Max Balandat, Brian Karrer, Jacob R. Gardner, Roman Garnett

Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees

In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree.
Paper