July 30, 2017

A Convolutional Encoder Model for Neural Machine Translation

Association for Computational Linguistics (ACL 2017)

The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. We present a faster and simpler architecture based on a succession of convolutional layers.

Jonas Gehring, Michael Auli, David Grangier, Yann Dauphin
July 30, 2017

Reading Wikipedia to Answer Open-Domain Questions

Association for Computational Linguistics (ACL 2017)

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes
July 22, 2017

Densely Connected Convolutional Networks

CVPR 2017

In this paper, we embrace the observation that convolutional networks can be substantially deeper, more accurate, and more efficient to train if they contain shorter connections between layers close to the input and those close to the output. We introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.

Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
July 21, 2017

Discovering Causal Signals in Images

CVPR 2017

This paper establishes the existence of observable footprints that reveal the “causal dispositions” of the object categories appearing in collections of images.

David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Scholkopf, Leon Bottou
July 21, 2017

Relationship Proposal Networks

Conference on Computer Vision and Pattern Recognition (CVPR) 2017

In this paper, we address the challenges of scene-level object recognition by using pairs of related regions in images to train a relationship proposer that, at test time, produces a manageable number of related regions.

Ahmed Elgammal, Ji Zhang, Mohamed Elhoseiny, Scott Cohen, Walter Chang
July 21, 2017

Link the head to the “beak”: Zero Shot Learning from Noisy Text Description at Part Precision

CVPR 2017

In this paper, we study learning visual classifiers from unstructured text descriptions at part precision with no training images. We propose a learning framework that is able to connect text terms to their relevant parts and suppress connections to non-visual text terms without any part-text annotations.

Mohamed Elhoseiny, Yizhe Zhu, Han Zhang, Ahmed Elgammal
July 21, 2017

CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning

CVPR 2017

We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires.

Bharath Hariharan, Justin Johnson, Larry Zitnick, Laurens van der Maaten, Li Fei-Fei, Ross Girshick
July 21, 2017

Learning Features by Watching Objects Move

CVPR 2017

This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation.

Deepak Pathak, Ross Girshick, Piotr Dollar, Trevor Darrell, Bharath Hariharan
July 21, 2017

Feature Pyramid Networks for Object Detection

CVPR 2017

In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost.

Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie
July 21, 2017

Semantic Amodal Segmentation

CVPR 2017

Common visual recognition tasks such as classification, object detection, and semantic segmentation are rapidly reaching maturity, and given the recent rate of progress, it is not unreasonable to conjecture that techniques for many of these problems will approach human levels of performance in the next few years. In this paper we look to the future: what is the next frontier in visual recognition?

Yan Zhu, Yuandong Tian, Dimitris Metaxas, Piotr Dollar
July 21, 2017

Aggregated Residual Transformations for Deep Neural Networks

CVPR 2017

We present a simple, highly modularized network architecture for image classification.

Saining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, Kaiming He
June 8, 2017

Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour

Data @ Scale

In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization.
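The fix this paper is known for is a linear learning-rate scaling rule combined with a gradual warmup phase; a minimal sketch of such a schedule, with illustrative constants rather than the paper's exact values:

```python
def lr_schedule(step, base_lr=0.1, base_batch=256, batch=8192,
                warmup_steps=500):
    """Sketch of a linear-scaling rule with warmup: the learning rate
    is scaled by batch / base_batch, then ramped up linearly over the
    first warmup_steps updates. All constants are illustrative."""
    target = base_lr * batch / base_batch
    if step < warmup_steps:
        return target * (step + 1) / warmup_steps
    return target
```

With these constants the target rate is 0.1 × (8192 / 256) = 3.2, reached at the end of warmup and held constant afterwards.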

Priya Goyal, Piotr Dollar, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He
May 21, 2017

CAN: Creative Adversarial Networks

International Conference on Computational Creativity (ICCC) 2017

We propose a new system for generating art. The system generates art by looking at art and learning about style; and becomes creative by increasing the arousal potential of the generated art by deviating from the learned styles. We build over Generative Adversarial Networks (GAN), which have shown the ability to learn to generate novel images simulating a given distribution.

Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, Marian Mazzone
April 24, 2017

Training Agent for First-Person Shooter Game With Actor-Critic Curriculum Learning

International Conference on Learning Representations (ICLR) 2017

In this paper, we propose a new framework for training a vision-based agent for First-Person Shooter (FPS) games, in particular Doom.

Yuxin Wu, Yuandong Tian
April 24, 2017

Unsupervised Cross-Domain Image Generation

International Conference on Learning Representations (ICLR) 2017

We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given representation function f, which accepts inputs in either domain, would remain unchanged.
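The constraint in the abstract can be sketched as a toy loss: G should map a source sample into the target domain while leaving the output of the fixed representation f unchanged. The linear f and G below are illustrative placeholders, not the paper's networks:

```python
import numpy as np

def f(x, W_f):
    # fixed, pre-trained representation function (placeholder)
    return np.tanh(W_f @ x)

def G(x, W_g):
    # generator to be trained (placeholder linear map)
    return W_g @ x

def f_constancy_loss(x, W_f, W_g):
    # || f(G(x)) - f(x) ||^2 : the term that keeps G
    # representation-preserving across the two domains
    d = f(G(x, W_g), W_f) - f(x, W_f)
    return float(d @ d)
```

An identity generator trivially achieves zero loss; training would push a nontrivial G toward the target domain while keeping this term small.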

Yaniv Taigman, Adam Polyak, Lior Wolf
April 24, 2017

An Analytical Formula of Population Gradient for Two-Layered ReLU network and its Applications in Convergence and Critical Point Analysis

International Conference on Learning Representations (ICLR) 2017

In this paper, we explore theoretical properties of training a two-layered ReLU network $g(x; w) = \sum_{j=1}^{K} \sigma(w_j^\top x)$ with centered $d$-dimensional spherical Gaussian input $x$ ($\sigma$ = ReLU). We train our network with gradient descent on $w$ to mimic the output of a teacher network with the same architecture and fixed parameters $w^*$.

Yuandong Tian
April 24, 2017

LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation

International Conference on Learning Representations (ICLR)

We present LR-GAN: an adversarial image generation model which takes scene structure and context into account.

Jianwei Yang, Anitha Kannan, Dhruv Batra, Devi Parikh
April 24, 2017

Towards Principled Methods for Training Generative Adversarial Networks

International Conference on Learning Representations (ICLR) 2017

The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks.

Leon Bottou, Martin Arjovsky
April 24, 2017

Improving Neural Language Models with a Continuous Cache

International Conference on Learning Representations (ICLR) 2017

We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation.
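The mechanism sketched in the abstract (scoring past hidden states by a dot product with the current one, then letting each vote for the word that followed it) could look roughly like this; `theta` and `lam` are illustrative hyperparameters, not the paper's values:

```python
import numpy as np

def cache_distribution(h_t, past_h, past_words, vocab_size, theta=0.3):
    """Sketch of a continuous-cache distribution: each stored hidden
    state past_h[i] votes for the word past_words[i] that followed it,
    weighted by exp(theta * dot(h_t, past_h[i]))."""
    scores = np.exp(theta * past_h @ h_t)   # similarity to current state
    p = np.zeros(vocab_size)
    np.add.at(p, past_words, scores)        # scatter votes onto vocabulary
    return p / p.sum()

def mix(p_model, p_cache, lam=0.2):
    # linear interpolation between the base LM and the cache distribution
    return (1 - lam) * p_model + lam * p_cache
```

Because the cache is read with a single dot product against stored activations, it adapts the prediction to recent history without retraining the base model.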

Armand Joulin, Edouard Grave, Nicolas Usunier
April 24, 2017

Revisiting Classifier Two-Sample Tests for GAN Evaluation and Causal Discovery

International Conference on Learning Representations (ICLR) 2017

In this paper, we aim to revive interest in the use of binary classifiers for two-sample testing. To this end, we review their fundamentals, previous literature on their use, compare their performance against alternative state-of-the-art two-sample tests, and propose them to evaluate generative adversarial network models applied to image synthesis.

David Lopez-Paz, Maxime Oquab