Explore the latest research from Facebook

All Publications

September 14, 2020 Hugh Leather, Chris Cummins
Paper

Machine Learning in Compilers: Past, Present, and Future

In this paper, we give a retrospective of machine learning in compiler optimisation, from its earliest inception, through works that set themselves apart, to today's deep learning, finishing with our vision of the field's future.
September 7, 2020 X. Alice Li, Devi Parikh
Paper

Lemotif: An Affective Visual Journal Using Deep Neural Networks

We present Lemotif, an integrated natural language processing and image generation system that uses machine learning to (1) parse a text-based input journal entry describing the user’s day for salient themes and emotions and (2) visualize the detected themes and emotions in creative and appealing image motifs.
September 7, 2020 Devi Parikh
Paper

Predicting A Creator’s Preferences In, and From, Interactive Generative Art

We train machine learning models to predict a subset of preferences from the rest. We find that preferences in the generative art form cannot predict preferences in other walks of life better than chance (and vice versa). However, preferences within the generative art form are reliably predictive of each other.
September 7, 2020 Gunjan Aggarwal, Devi Parikh
Paper

Neuro-Symbolic Generative Art: A Preliminary Study

As a preliminary study, we train a generative deep neural network on samples from the symbolic approach. We demonstrate through human studies that subjects find the final artifacts and the creation process using our neuro-symbolic approach to be more creative than the symbolic approach 61% and 82% of the time, respectively.
September 3, 2020 Maxim Naumov, John Kim, Dheevatsa Mudigere, Srinivas Sridharan, Xiaodong Wang, Whitney Zhao, Serhat Yilmaz, Changkyu Kim, Hector Yuen, Mustafa Ozdal, Krishnakumar Nair, Isabel Gao, Bor-Yiing Su, Jiyan Yang, Mikhail Smelyanskiy
Paper

Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems

Large-scale training is important to ensure high performance and accuracy of machine-learning models. At Facebook we use many different models, including computer vision, video and language models. However, in this paper we focus on the deep learning recommendation models (DLRMs), which are responsible for more than 50% of the training demand in our data centers.
August 31, 2020 Shen Li, Yanli Zhao, Rohan Verma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, Soumith Chintala
Paper

PyTorch Distributed: Experiences on Accelerating Data Parallel Training

This paper presents the design, implementation, and evaluation of the PyTorch distributed data parallel module. PyTorch is a widely adopted scientific computing package used in deep learning research and applications. Recent advances in deep learning argue for the value of large datasets and large models, which necessitates the ability to scale model training out to more computational resources.
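The module described above wraps a local model and synchronizes gradients across processes during the backward pass. As a rough illustration (not the paper's own code), here is a minimal single-process sketch of the `torch.nn.parallel.DistributedDataParallel` API using the CPU `gloo` backend:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Set up a one-process group; real jobs launch many processes
# (e.g. via torchrun) with distinct ranks.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

# Wrap a local model; DDP broadcasts parameters at construction
# and all-reduces gradients during backward().
model = torch.nn.Linear(10, 1)
ddp_model = DDP(model)
opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

# One training step on this rank's shard of the data.
x = torch.randn(4, 10)
loss = ddp_model(x).sum()
loss.backward()  # gradients are synchronized across ranks here
opt.step()

dist.destroy_process_group()
```

With more than one process, each rank runs the same script on its own data shard, and the averaged gradients keep all model replicas identical after every step.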
August 27, 2020 Nima Noorshams, Saurabh Verma, Aude Hofleitner
Paper

TIES: Temporal Interaction Embeddings For Enhancing Social Media Integrity At Facebook

In this paper, we present our efforts to protect various social media entities at Facebook from people who try to abuse our platform. We present a novel Temporal Interaction EmbeddingS (TIES) model that is designed to capture rogue social interactions and flag them for further suitable actions. TIES is a supervised, production-ready deep learning model deployed at Facebook-scale networks.
August 25, 2020 Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, Marcus Rohrbach
Paper

Adversarial Continual Learning

Continual learning aims to learn new tasks without forgetting previously learned ones. We hypothesize that representations learned to solve each task in a sequence have a shared structure while containing some task-specific properties. We show that shared features are significantly less prone to forgetting and propose a novel hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features required to solve a sequence of tasks.
August 24, 2020 Hang Chu, Shugao Ma, Fernando De la Torre, Sanja Fidler, Yaser Sheikh
Paper

Expressive Telepresence via Modular Codec Avatars

VR telepresence consists of interacting with another human in a virtual space represented by an avatar. Today most avatars are cartoon-like, but soon the technology will allow video-realistic ones. This paper moves in that direction, presenting Modular Codec Avatars (MCA), a method to generate hyper-realistic faces driven by the cameras in the VR headset.
August 24, 2020 Kevin Musgrave, Serge Belongie, Ser Nam Lim
Paper

A Metric Learning Reality Check

Deep metric learning papers from the past four years have consistently claimed great advances in accuracy, often more than doubling the performance of decade-old methods. In this paper, we take a closer look at the field to see if this is actually true.