December 10, 2019

PHYRE: A New Benchmark for Physical Reasoning

Neural Information Processing Systems (NeurIPS)

Understanding and reasoning about physics is an important ability of intelligent agents. We develop the PHYRE benchmark for physical reasoning that contains a set of simple classical mechanics puzzles in a 2D physical environment.

By: Anton Bakhtin, Laurens van der Maaten, Justin Johnson, Laura Gustafson, Ross Girshick

December 10, 2019

One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers

Neural Information Processing Systems (NeurIPS)

The success of lottery ticket initializations [7] suggests that small, sparsified networks can be trained so long as the network is initialized appropriately. Unfortunately, finding these “winning ticket” initializations is computationally expensive. One potential solution is to reuse the same winning tickets across a variety of datasets and optimizers. However, the generality of winning ticket initializations remains unclear. Here, we address this question by generating winning tickets for one training configuration (optimizer and dataset) and evaluating their performance on another configuration.
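
The ticket-transfer setup described above can be sketched in a few lines. This is an illustrative toy, not the paper's code: the layer size, sparsity level, and "training" perturbation are all hypothetical, and real tickets come from magnitude pruning after actual training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense layer: its weights at initialization and after
# training on some source configuration (dataset + optimizer).
w_init = rng.normal(size=(8, 8))
w_trained = w_init + rng.normal(scale=0.5, size=(8, 8))

# Lottery-ticket pruning: keep the top-k weights by trained magnitude...
sparsity = 0.75
k = int(w_trained.size * (1 - sparsity))
threshold = np.sort(np.abs(w_trained), axis=None)[-k]
mask = np.abs(w_trained) >= threshold

# ...and reset the surviving weights to their *initial* values.
# The (mask, w_init) pair is the "winning ticket"; the transfer question
# is whether this ticket also trains well under a *different* dataset
# and optimizer than the one that produced the mask.
ticket = mask * w_init
```

The key point is that the ticket is defined by the mask plus the original initialization, so reusing it across configurations costs nothing beyond the one pruning run.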

By: Ari Morcos, Haonan Yu, Michela Paganini, Yuandong Tian

December 10, 2019

Robust Multi-agent Counterfactual Prediction

Neural Information Processing Systems (NeurIPS)

We consider the problem of using logged data to make predictions about what would happen if we changed the ‘rules of the game’ in a multi-agent system. This task is difficult because in many cases we observe actions individuals take but not their private information or their full reward functions. In addition, agents are strategic, so when the rules change, they will also change their actions.

By: Alexander Peysakhovich, Christian Kroer, Adam Lerer

December 10, 2019

Limiting Extrapolation in Linear Approximate Value Iteration

Neural Information Processing Systems (NeurIPS)

We study linear approximate value iteration (LAVI) with a generative model. While linear models may accurately represent the optimal value function using only a few parameters, several empirical and theoretical studies show that the combination of least-squares projection with the Bellman operator may be expansive, leading LAVI to amplify errors over iterations and eventually diverge. We introduce an algorithm that approximates value functions by combining Q-values estimated at a set of anchor states.
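
The anchor-state idea can be illustrated with a one-dimensional toy (the anchor positions and values below are hypothetical, and this sketch shows only the general interpolation principle, not the paper's exact algorithm): combining anchor Q-values with convex weights keeps every estimate inside the range of the anchors, avoiding the extrapolation that makes least-squares projection expansive.

```python
import numpy as np

# Hypothetical anchor states on [0, 1] with an estimated value at each.
anchors = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
v_anchor = np.array([1.0, 0.8, 0.3, 0.6, 0.9])

def interpolate(s):
    """Value at state s as a convex combination of the two nearest anchors.

    Because the weights are nonnegative and sum to one, the estimate can
    never leave the range of the anchor values: interpolation is a
    sup-norm non-expansion, unlike unconstrained least-squares projection.
    """
    i = int(np.clip(np.searchsorted(anchors, s), 1, len(anchors) - 1))
    lo, hi = anchors[i - 1], anchors[i]
    w = (s - lo) / (hi - lo)
    return (1 - w) * v_anchor[i - 1] + w * v_anchor[i]

# Estimates stay bounded by the anchor values (no extrapolation blow-up).
assert v_anchor.min() <= interpolate(0.6) <= v_anchor.max()
```

Iterating a Bellman backup through such a non-expansive approximator cannot amplify errors the way an expansive projection can, which is the intuition behind limiting extrapolation.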

By: Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill

December 9, 2019

Cold Case: the Lost MNIST Digits

Neural Information Processing Systems (NeurIPS)

Although the popular MNIST dataset [LeCun et al., 1994] is derived from the NIST database [Grother and Hanaoka, 1995], the precise processing steps for this derivation have been lost to time. We propose a reconstruction that is accurate enough to serve as a replacement for the MNIST dataset, with insignificant changes in accuracy. We trace each MNIST digit to its NIST source and its rich metadata such as writer identifier, partition identifier, etc.

By: Chhavi Yadav, Léon Bottou

December 9, 2019

Fixing the train-test resolution discrepancy

Neural Information Processing Systems (NeurIPS)

Data augmentation is key to the training of neural networks for image classification. This paper first shows that existing augmentations induce a significant discrepancy between the size of the objects seen by the classifier at train and test time: in fact, a lower train resolution improves the classification at test time!
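
A back-of-the-envelope calculation shows where the discrepancy comes from (the numbers below are hypothetical, not the paper's): a train-time random crop covers only part of the image area before being resized to the network resolution, so the same object spans more pixels at train time than under the test pipeline's full-image center crop.

```python
def apparent_size(object_frac, crop_area_frac, out_res):
    """Pixels spanned by an object occupying `object_frac` of the image
    width, after a crop covering `crop_area_frac` of the image area is
    resized to `out_res` pixels."""
    crop_width_frac = crop_area_frac ** 0.5
    return out_res * object_frac / crop_width_frac

# Same object, same 224-pixel network input:
train = apparent_size(0.5, 0.4, 224)  # train crop covering 40% of the area
test = apparent_size(0.5, 1.0, 224)   # test pipeline sees ~the whole image

# Train-time crops zoom in, so objects look larger during training;
# raising the test resolution (or lowering the train one) compensates.
assert train > test
```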

By: Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou

December 9, 2019

ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks

Neural Information Processing Systems (NeurIPS)

We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers.
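
The co-attentional exchange at the heart of the two-stream design can be sketched as follows. This is a toy single-head version with identity projections; the real model uses learned multi-head query/key/value projections inside full transformer blocks.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(x_vis, x_txt, d):
    """One co-attentional exchange: each stream queries the *other* stream.

    Queries come from one modality while keys and values come from the
    other, so visual features are updated with language context and
    textual features with visual context.
    """
    attn_v = softmax(x_vis @ x_txt.T / np.sqrt(d)) @ x_txt  # vision attends to text
    attn_t = softmax(x_txt @ x_vis.T / np.sqrt(d)) @ x_vis  # text attends to vision
    return attn_v, attn_t

d = 16
rng = np.random.default_rng(0)
img_tokens = rng.normal(size=(5, d))  # e.g. 5 image regions
txt_tokens = rng.normal(size=(7, d))  # e.g. 7 word tokens
v_out, t_out = co_attention(img_tokens, txt_tokens, d)
```

Each stream keeps its own sequence length and representation; only the attention keys and values cross modalities, which is what makes the two-stream layout different from feeding both modalities into a single transformer.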

By: Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee

December 9, 2019

Gossip-based Actor-Learner Architectures for Deep Reinforcement Learning

Neural Information Processing Systems (NeurIPS)

Multi-simulator training has contributed to the recent success of Deep Reinforcement Learning by stabilizing learning and allowing for higher training throughputs. We propose Gossip-based Actor-Learner Architectures (GALA) where several actor-learners (such as A2C agents) are organized in a peer-to-peer communication topology, and exchange information through asynchronous gossip in order to take advantage of a large number of distributed simulators.
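
The gossip mechanism can be sketched with plain parameter averaging on a ring (the topology, agent count, and synchronous schedule below are illustrative; the paper's gossip is asynchronous and the agents are full A2C learners, not raw vectors).

```python
import numpy as np

# Hypothetical parameter vectors for 4 actor-learners in a ring topology.
rng = np.random.default_rng(0)
params = [rng.normal(size=3) for _ in range(4)]

def gossip_step(params):
    """One gossip round on a directed ring.

    Each agent averages its parameters with its neighbor's; repeated
    rounds drive all agents toward consensus without a central server.
    """
    n = len(params)
    return [0.5 * (params[i] + params[(i + 1) % n]) for i in range(n)]

for _ in range(50):
    params = gossip_step(params)

# The mixing is doubly stochastic, so repeated rounds converge to the
# average of the initial parameters (up to numerical tolerance).
consensus = np.mean(params, axis=0)
assert all(np.allclose(p, consensus, atol=1e-6) for p in params)
```

Because each agent only ever talks to its peers, communication cost stays constant as more actor-learners (and their simulators) are added.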

By: Mahmoud Assran, Joshua Romoff, Nicolas Ballas, Joelle Pineau, Michael Rabbat

December 8, 2019

Anti-efficient encoding in emergent communication

Neural Information Processing Systems (NeurIPS)

Despite renewed interest in emergent language simulations with neural networks, little is known about the basic properties of the induced code, and how they compare to human language. One fundamental characteristic of the latter, known as Zipf’s Law of Abbreviation (ZLA), is that more frequent words are efficiently associated with shorter strings. We study whether the same pattern emerges when two neural networks, a “speaker” and a “listener”, are trained to play a signaling game.
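
The ZLA test itself is simple to state: check whether word frequency and word length are negatively correlated. A minimal stdlib-only sketch on a toy corpus (not the paper's data or emergent codes):

```python
from collections import Counter

# Toy corpus: under Zipf's Law of Abbreviation, frequent words are short.
corpus = "the cat sat on the mat the cat ran the dog barked at the cat".split()
freq = Counter(corpus)

words = list(freq)
f = [freq[w] for w in words]
lengths = [len(w) for w in words]

# Pearson correlation between frequency and word length;
# ZLA predicts it is negative (more frequent -> shorter).
n = len(words)
mf, ml = sum(f) / n, sum(lengths) / n
cov = sum((a - mf) * (b - ml) for a, b in zip(f, lengths))
sd_f = sum((a - mf) ** 2 for a in f) ** 0.5
sd_l = sum((b - ml) ** 2 for b in lengths) ** 0.5
r = cov / (sd_f * sd_l)
assert r < 0  # this toy corpus is ZLA-consistent
```

For an emergent code, the same statistic is computed over the messages the speaker actually sends; "anti-efficient" means the correlation comes out with the opposite sign from natural language.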

By: Rahma Chaabouni, Eugene Kharitonov, Emmanuel Dupoux, Marco Baroni

December 8, 2019

Learning to Perform Local Rewriting for Combinatorial Optimization

Neural Information Processing Systems (NeurIPS)

Search-based methods for hard combinatorial optimization are often guided by heuristics, and tuning those heuristics across conditions and problem instances is time-consuming. In this paper, we propose NeuRewriter, which learns a policy to pick heuristics and rewrite local components of the current solution, iteratively improving it until convergence.
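
The rewrite-until-convergence loop can be sketched with a greedy stand-in for the learned policy (the toy cost function and adjacent-swap rewrite rule below are hypothetical; NeuRewriter learns both the region-picking and the rule-picking policies with RL).

```python
def cost(seq):
    """Toy objective: total size of adjacent jumps in a sequence."""
    return sum(abs(a - b) for a, b in zip(seq, seq[1:]))

def local_rewrite(seq):
    """Try every adjacent-swap rewrite; return the best improving one, if any."""
    best, best_cost = None, cost(seq)
    for i in range(len(seq) - 1):
        cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
        if cost(cand) < best_cost:
            best, best_cost = cand, cost(cand)
    return best

solution = [3, 1, 4, 1, 5, 9, 2, 6]
# Iteratively improve the current solution until no local rewrite helps.
while (improved := local_rewrite(solution)) is not None:
    solution = improved

assert cost(solution) <= cost([3, 1, 4, 1, 5, 9, 2, 6])
```

The loop always starts from a complete feasible solution and only ever edits a local component of it, which is what distinguishes the rewriting view from constructing solutions from scratch.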

By: Xinyun Chen, Yuandong Tian