December 5, 2016

Advancing AI at NIPS 2016

By: Kelly Berschauer

Deep learning has been at the root of significant progress in many application areas, such as computer perception (including computer vision) and natural language processing. Almost all of these systems currently rely on supervised learning with human-curated data labeling. The challenge of the next several years is to teach machines to learn from raw, unlabeled data, such as images, videos and text. Intelligent systems today do not possess “common sense”, which humans and animals acquire by observing the world, acting in it, and understanding its physical constraints. This is the focus of Facebook Artificial Intelligence Research (FAIR) Director Yann LeCun’s keynote address at NIPS in Barcelona, Spain this week.

NIPS 2016, the thirtieth annual Neural Information Processing Systems conference, is arguably the premier conference on machine learning (ML) and computational neuroscience. Its importance to practitioners of ML and artificial intelligence (AI) is even more significant with the growing interest in deep learning in both academia and industry.

AI is proving to be an increasingly important factor in realizing Facebook’s mission to enable communication between people. “People today are bombarded with information from friends, news organizations, websites, etc.; it’s essential to help them sift through this mass of information. But that requires knowledge of what they are interested in, what motivates them, what entertains them, and what makes them learn new things. This understanding is something that only AI can provide,” said LeCun.

At Facebook, we are already using AI to organize information within the News Feed, to give descriptions of pictures to the visually impaired, to read text via a new DeepText engine that can understand with near-human accuracy the content in thousands of posts per second across more than 20 languages, and even to produce population density maps from satellite images so we can help connect more people to the internet. Two weeks ago, we put another AI capability in the palm of your hand when we launched Caffe2Go, which powers a feature that automatically repaints your favorite photo or video in the style of your favorite artist. “Previously, the artificial intelligence required to do this would have necessitated massive server infrastructure and compute capability,” said Joaquin Quinonero Candela, Director of Applied Machine Learning at Facebook.

These technology advancements are derived from tight collaborations between FAIR and the Applied Machine Learning researchers and engineers at Facebook, who are experts at identifying opportunities and applying new science to existing Facebook products. Many members of these teams will be on hand at NIPS to share their findings and collaborate with their peers, in particular by presenting their work in poster sessions and hosting workshops that explore topics such as integrating neural and symbolic approaches to cognitive computation, machine intelligence, and large-scale computer vision systems.

In his keynote, LeCun will share his thoughts on predictive models and unsupervised learning, as well as new approaches such as adversarial training that he believes are key to making significant progress in AI. This will also be a topic of further discussion at a Deep Learning Symposium held Thursday morning, jointly run with Yoshua Bengio from the University of Montreal, Navdeep Jaitly from Google Brain, and Roger Grosse from the University of Toronto.

“As long as the problem of predictive learning is not resolved, we will not have machines that are truly intelligent. It is a fundamental scientific and mathematical question, not just a technological challenge. Solving this problem could take many years or decades. In truth, we don’t really know,” said LeCun. “It will require contributions from the tech industry, academia and government. And it has to be done in the open. That’s why publishing and participating in conferences such as NIPS, where we can openly share knowledge and collaborate on new ideas, are so important to advancing the field.”

Posters and workshops hosted by Facebook researchers include:


Disentangling factors of variation in deep representation using adversarial training
Michael F Mathieu · Zhizhen Zhao · Aditya Ramesh · Pablo Sprechmann · Yann LeCun

The Product Cut
Thomas Laurent · James von Brecht · Xavier Bresson · Arthur Szlam

Learning Multiagent Communication with Backpropagation
Sainbayar Sukhbaatar · Arthur Szlam · Rob Fergus

Dialog-based Language Learning
Jason E Weston


Deep Learning Symposium
Yoshua Bengio · Yann LeCun · Navdeep Jaitly · Roger B Grosse


Cognitive Computation: Integrating Neural and Symbolic Approaches
Tarek R. Besold · Antoine Bordes · Gregory Wayne · Artur Garcez

Extreme Classification: Multi-class and Multi-label Learning in Extremely Large Label Spaces
Moustapha Cisse · Manik Varma · Samy Bengio

Intuitive Physics
Adam Lerer · Jiajun Wu · Josh Tenenbaum · Emmanuel Dupoux · Rob Fergus

Machine Intelligence @ NIPS
Tomas Mikolov · Marco Baroni · Armand Joulin · Germán Kruszewski · Angeliki Lazaridou · Klemen Simonic · Allan Jabri

Adversarial Training
David Lopez-Paz · Leon Bottou · Alec Radford, includes poster presentation Semantic Segmentation using Adversarial Networks by Pauline Luc · Camille Couprie · Soumith Chintala · Jakob Verbeek

Let’s Discuss: Learning Methods for Dialogue
Hal Daume III · Paul Mineiro · Amanda Stent · Jason E Weston

Large Scale Computer Vision Systems
Manohar Paluri · Lorenzo Torresani · Gal Chechik · Dario Garcia · Du Tran, includes poster presentation Population Density Estimation with Deconvolutional Neural Networks by Amy Zhang · Xiaming Liu · Tobias Tiecke · Andreas Gros

Machine Learning Systems
Aparna Lakshmiratan · Li Erran Li · Siddhartha Sen · Sarah Bird · Hussein Mehanna

Learning with Tensors: Why Now and How?
Anima Anandkumar · Rong Ge · Yan Liu · Maximilian Nickel · Qi (Rose) Yu