
June 18, 2016

Learning Physical Intuition of Block Towers by Example

International Conference on Machine Learning

Wooden blocks are a common toy for infants, allowing them to develop motor skills and gain intuition about the physical behavior of the world. In this paper, we explore the ability of deep feed-forward models to learn such intuitive physics.

By: Adam Lerer, Sam Gross, Rob Fergus
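
As a rough illustration of the setup, here is a minimal sketch of a feed-forward model of this kind: a small convolutional network in PyTorch that maps a rendered image of a block tower to the probability that the tower falls. The architecture, layer sizes, and 64x64 input are illustrative assumptions, not the networks evaluated in the paper.

```python
import torch
import torch.nn as nn

class TowerStabilityNet(nn.Module):
    """Small conv net: image of a block tower -> P(tower falls).
    Illustrative only; the paper evaluates much larger architectures."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h)).squeeze(1)

# Usage: a batch of 8 RGB renders, 64x64 pixels, all labeled "falls".
net = TowerStabilityNet()
images = torch.randn(8, 3, 64, 64)
p_fall = net(images)   # shape (8,), each value in (0, 1)
loss = nn.functional.binary_cross_entropy(p_fall, torch.ones(8))
```
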
June 8, 2016

Key-Value Memory Networks for Directly Reading Documents

EMNLP 2016

This paper introduces a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation.

By: Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, Jason Weston
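
A minimal sketch of the memory read the abstract describes, with separate key and value encodings: keys are used for addressing, values for composing the output. This assumes precomputed dense encodings and shows a single hop; the paper learns the encodings and iterates the read over multiple hops.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kv_memory_read(query, keys, values):
    """One hop of a key-value memory read: the query is matched against
    the keys to address the memory, and the output is a weighted sum of
    the separately encoded values.
    query: (d,), keys: (n, d), values: (n, d)."""
    p = softmax(keys @ query)   # addressing: relevance of each memory slot
    return values.T @ p         # reading: value-based output, shape (d,)

# Toy usage with random encodings: 100 memory slots, 16-d embeddings.
rng = np.random.default_rng(0)
q = rng.standard_normal(16)
K = rng.standard_normal((100, 16))
V = rng.standard_normal((100, 16))
o = kv_memory_read(q, K, V)
```
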
May 2, 2016

Deep Multi-Scale Video Prediction Beyond Mean Square Error

ICLR 2016

This work addresses the problem of predicting future frames in a video sequence given the preceding frames, using training criteria beyond the standard mean squared error, which tends to produce blurry predictions.

By: Michael Mathieu, Camille Couprie, Yann LeCun
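
One of the criteria the paper proposes in place of plain mean squared error is an image gradient difference loss, which matches the sharpness of predicted and target frames. A hedged PyTorch sketch of that idea (the paper's multi-scale architecture and adversarial loss are omitted):

```python
import torch

def gradient_difference_loss(pred, target, alpha=1.0):
    """Penalize mismatch between the spatial gradient magnitudes of the
    predicted and true frames; this discourages the blur that a pure
    MSE objective tends to produce. Tensors: (batch, channels, H, W)."""
    def grads(x):
        dx = (x[..., :, 1:] - x[..., :, :-1]).abs()   # horizontal differences
        dy = (x[..., 1:, :] - x[..., :-1, :]).abs()   # vertical differences
        return dx, dy

    pdx, pdy = grads(pred)
    tdx, tdy = grads(target)
    return ((pdx - tdx).abs() ** alpha).mean() + ((pdy - tdy).abs() ** alpha).mean()

# Usage: combine with an L2 term when training a frame predictor.
pred = torch.rand(4, 3, 64, 64, requires_grad=True)
target = torch.rand(4, 3, 64, 64)
loss = torch.nn.functional.mse_loss(pred, target) + gradient_difference_loss(pred, target)
loss.backward()
```
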
May 2, 2016

Predicting Distributions with Linearizing Belief Networks

ICLR 2016

This work introduces a new family of networks called linearizing belief networks (LBNs).

By: David Grangier
May 2, 2016

Metric Learning with Adaptive Density Discrimination

ICLR 2016

Distance metric learning approaches learn a transformation to a representation space in which distance corresponds to a predefined notion of similarity.

By: Oren Rippel, Manohar Paluri, Piotr Dollar, Lubomir Bourdev
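
As a generic example of this family (not the paper's adaptive density discrimination objective, which instead models local class distributions), here is a sketch of the classic triplet loss over learned embeddings:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull an anchor embedding toward a similar example and push it away
    from a dissimilar one, up to a margin. Inputs: (batch, d) embeddings
    produced by a learned network."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)   # distance to similar example
    d_neg = (anchor - negative).pow(2).sum(dim=1)   # distance to dissimilar example
    return F.relu(d_pos - d_neg + margin).mean()

# Usage with random, L2-normalized embeddings.
a, p, n = (F.normalize(torch.randn(32, 128), dim=1) for _ in range(3))
loss = triplet_loss(a, p, n)
```
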
April 19, 2016

Evaluating Prerequisite Qualities for Learning End-to-end Dialog Systems

ICLR 2016

An approach for testing the abilities of conversational agents using question-answering over a knowledge base, personalized recommendations, and natural conversation.

By: Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, Jason Weston
April 13, 2016

Abstractive Sentence Summarization with Attentive Recurrent Neural Networks

NAACL 2016

Abstractive sentence summarization generates a shorter version of a given sentence while attempting to preserve its meaning. We introduce a conditional recurrent neural network (RNN) which generates a summary of an input sentence.

By: Sumit Chopra, Michael Auli, Alexander M. Rush
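
A minimal sketch of one decoding step of such a conditional RNN, using simple dot-product attention over the encoded source sentence. The GRU cell, dimensions, and toy vocabulary size are illustrative assumptions; the paper's actual encoder is convolutional and attention-based.

```python
import torch
import torch.nn as nn

class AttentiveDecoderStep(nn.Module):
    """One step of a conditional RNN summarizer: attend over the encoded
    input sentence, update the decoder state, emit next-word logits."""
    def __init__(self, d, vocab_size=10000):
        super().__init__()
        self.cell = nn.GRUCell(2 * d, d)    # input: [prev word emb; context]
        self.out = nn.Linear(d, vocab_size)

    def forward(self, prev_emb, state, enc):      # enc: (src_len, d)
        scores = enc @ state                      # dot-product attention scores
        weights = torch.softmax(scores, dim=0)
        ctx = (weights.unsqueeze(1) * enc).sum(dim=0)   # context vector, (d,)
        state = self.cell(torch.cat([prev_emb, ctx]).unsqueeze(0),
                          state.unsqueeze(0)).squeeze(0)
        return self.out(state), state             # next-word logits, new state

# Usage: one step over a 20-token encoded source, 64-d states.
d = 64
step = AttentiveDecoderStep(d)
enc = torch.randn(20, d)
logits, state = step(torch.randn(d), torch.zeros(d), enc)
```
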
April 1, 2016

The Goldilocks Principle: Reading Children’s Books with Explicit Memory Representations

ICLR 2016

We introduce a new test of how well language models capture meaning in children’s books.

By: Felix Hill, Antoine Bordes, Sumit Chopra, Jason Weston
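
The test poses a cloze task: given a 20-sentence passage from a children's book, a model must choose the held-out word in the 21st sentence from 10 candidates. As a sketch of the format only, here is a trivial frequency baseline (the models actually compared in the paper range from n-gram language models to memory networks):

```python
from collections import Counter

def cbt_frequency_baseline(context_words, candidates):
    """Pick the candidate answer that occurs most often in the context
    passage; ties are broken by candidate order."""
    counts = Counter(w.lower() for w in context_words)
    return max(candidates, key=lambda w: counts[w.lower()])

# Toy usage (real instances have a 20-sentence context and 10 candidates).
context = "the cat sat on the mat and the cat purred".split()
print(cbt_frequency_baseline(context, ["dog", "cat", "mat"]))  # -> "cat"
```
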
January 7, 2016

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

ICLR 2016

We stabilize generative adversarial networks (GANs) with a set of architectural constraints and visualize the internals of the trained networks.

By: Alec Radford, Luke Metz, Soumith Chintala
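
A minimal PyTorch sketch of a generator following the paper's guidelines: fractionally strided convolutions instead of pooling, batch normalization, ReLU activations, and a tanh output. The layer widths and the 32x32 output resolution are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# DCGAN-style generator: each ConvTranspose2d doubles spatial resolution.
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0, bias=False),  # z (100-d) -> 4x4
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),  # 4x4 -> 8x8
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),   # 8x8 -> 16x16
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),     # 16x16 -> 32x32 RGB
    nn.Tanh(),
)

z = torch.randn(16, 100, 1, 1)   # a batch of noise vectors
fake_images = generator(z)       # (16, 3, 32, 32), values in [-1, 1]
```
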
December 15, 2015

Learning to Segment Object Candidates

NIPS 2015

In this paper, we propose a new approach to generating object proposals, based on a discriminative convolutional network. Our model obtains substantially higher object recall using fewer proposals. We also show that it generalizes to object categories it has not seen during training.

By: Pedro O. Pinheiro, Ronan Collobert, Piotr Dollar
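
A minimal sketch of a two-headed proposal network in the spirit of the paper: a shared convolutional trunk feeds a class-agnostic mask head and an objectness score head. All layer sizes here are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ProposalNet(nn.Module):
    """Shared trunk with two heads: one predicts a coarse class-agnostic
    segmentation mask for an input patch, the other a single score for
    whether the patch is centered on a full object."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.mask_head = nn.Conv2d(128, 1, 1)   # per-location mask logits
        self.score_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1)
        )

    def forward(self, patch):
        f = self.trunk(patch)
        return self.mask_head(f), self.score_head(f)  # mask logits, objectness logit

# Usage: two 128x128 patches -> mask logits (2,1,32,32) and scores (2,1).
net = ProposalNet()
mask_logits, score = net(torch.randn(2, 3, 128, 128))
```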