October 22, 2017

Mask R-CNN

International Conference on Computer Vision (ICCV)

We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition.

Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick
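
The parallel-branch design described in the abstract can be sketched in a few lines of NumPy. All shapes, layer sizes, and the linear "heads" below are illustrative assumptions for exposition, not the paper's actual convolutional architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """Fully connected layer: x @ w + b."""
    return x @ w + b

# Hypothetical shapes: 4 RoIs, each pooled to a 256-dim feature vector.
n_rois, feat_dim, n_classes = 4, 256, 3
roi_features = rng.standard_normal((n_rois, feat_dim))

# Box-recognition branch (as in Faster R-CNN): class scores + box deltas.
w_cls = rng.standard_normal((feat_dim, n_classes))
w_box = rng.standard_normal((feat_dim, 4 * n_classes))
class_scores = dense(roi_features, w_cls, np.zeros(n_classes))    # (4, 3)
box_deltas = dense(roi_features, w_box, np.zeros(4 * n_classes))  # (4, 12)

# Mask branch, run in parallel on the same RoI features: one small
# per-class mask (here a flattened 14x14 grid) predicted per RoI.
mask_hw = 14
w_mask = rng.standard_normal((feat_dim, n_classes * mask_hw * mask_hw))
masks = dense(roi_features, w_mask, np.zeros(w_mask.shape[1]))
masks = masks.reshape(n_rois, n_classes, mask_hw, mask_hw)
```

The key point is that the mask branch consumes the same shared RoI features as the existing recognition branch, so instance masks come at small marginal cost.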
October 22, 2017

Unsupervised Creation of Parameterized Avatars

International Conference on Computer Vision (ICCV)

We study the problem of mapping an input image to a tied pair consisting of a vector of parameters and an image that is created using a graphical engine from the vector of parameters. The mapping’s objective is to have the output image as similar as possible to the input image. During training, no supervision is given in the form of matching inputs and outputs.

Lior Wolf, Yaniv Taigman, Adam Polyak
October 22, 2017

Learning to Reason: End-to-End Module Networks for Visual Question Answering

International Conference on Computer Vision (ICCV)

Natural language questions are inherently compositional, and many are most easily answered by reasoning about their decomposition into modular sub-problems. For example, to answer “is there an equal number of balls and boxes?” we can look for balls, look for boxes, count them, and compare the results. The recently proposed Neural Module Network (NMN) architecture implements this approach to question answering by parsing questions into linguistic substructures and assembling question-specific deep networks from smaller modules that each solve one subtask.

Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Kate Saenko
October 22, 2017

Focal Loss for Dense Object Detection

International Conference on Computer Vision (ICCV)

The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, Piotr Dollar
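
The loss the title refers to reshapes cross-entropy so that well-classified (easy) examples are down-weighted, letting training focus on hard examples. A minimal NumPy sketch of the α-balanced binary form, with the paper's default γ = 2, α = 0.25:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class; y: label in {0, 1}.
    With gamma = 0 the modulating factor vanishes and the loss reduces
    to (alpha-weighted) cross-entropy.
    """
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

For an easy positive (p = 0.9) the factor (1 - 0.9)² = 0.01 shrinks the loss by two orders of magnitude relative to plain cross-entropy, which is what keeps the huge number of easy background locations in a dense detector from dominating training.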
October 22, 2017

Dense and Low-Rank Gaussian CRFs using Deep Embeddings

International Conference on Computer Vision (ICCV)

In this work we introduce a structured prediction model that endows the Deep Gaussian Conditional Random Field (G-CRF) with a […]

Siddhartha Chandra, Nicolas Usunier, Iasonas Kokkinos
October 22, 2017

Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training

International Conference on Computer Vision (ICCV)

While strong progress has been made in image captioning recently, machine and human captions are still quite distinct. This is primarily due to deficiencies in the generated word distribution, vocabulary size, and a strong bias in the generators towards frequent captions. Furthermore, humans – rightfully so – generate multiple, diverse captions, due to the inherent ambiguity in the captioning task, which is not explicitly considered in today’s systems. To address these challenges, we change the training objective of the caption generator from reproducing ground-truth captions to generating a set of captions that is indistinguishable from human written captions.

Rakshith Shetty, Marcus Rohrbach, Lisa Anne Hendricks, Mario Fritz, Bernt Schiele
October 5, 2017

STARDATA: a StarCraft AI Research Dataset

Association for the Advancement of Artificial Intelligence Digital Entertainment Conference

We release a dataset of 65,646 StarCraft replays that contains 1,535 million frames and 496 million player actions. We provide full game state data along with the original replays, which can be viewed in StarCraft. The game state data was recorded every 3 frames, which ensures suitability for a wide variety of machine learning tasks such as strategy classification, inverse reinforcement learning, imitation learning, forward modeling, partial information extraction, and others. We illustrate the diversity of the data with various statistics and provide examples of tasks that benefit from the dataset.

Zeming Lin, Jonas Gehring, Vasil Khalidov, Gabriel Synnaeve
September 7, 2017

Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog

Conference on Empirical Methods in Natural Language Processing (EMNLP)

In this paper, using a Task & Talk reference game between two agents as a testbed, we present a sequence of ‘negative’ results culminating in a ‘positive’ one – showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional.

Satwik Kottur, José M.F. Moura, Stefan Lee, Dhruv Batra
August 6, 2017

Wasserstein Generative Adversarial Networks

International Conference on Machine Learning (ICML)

We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches.

Martin Arjovsky, Soumith Chintala, Leon Bottou
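
The WGAN critic maximizes E[f(real)] − E[f(fake)], an estimate of the Wasserstein-1 distance, with the Lipschitz constraint enforced by weight clipping. A toy NumPy sketch with a hypothetical linear critic on Gaussian data (the gradient is written analytically, which is only valid for this linear f):

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy linear critic f_w(x); WGAN needs f_w (approximately)
    1-Lipschitz, enforced here by clipping w after each update."""
    return x @ w

def critic_loss(w, real, fake):
    """Negated critic objective E[f(real)] - E[f(fake)]."""
    return -(critic(real, w).mean() - critic(fake, w).mean())

w = rng.standard_normal(2)
real = rng.normal(loc=2.0, size=(256, 2))
fake = rng.normal(loc=0.0, size=(256, 2))

lr, clip = 0.05, 0.01
for _ in range(100):
    # For a linear critic, d(loss)/dw = -(mean(real) - mean(fake)).
    grad = -(real.mean(axis=0) - fake.mean(axis=0))
    w -= lr * grad
    w = np.clip(w, -clip, clip)   # weight clipping from the paper

# -critic_loss(w, real, fake) now estimates (a scaled) Wasserstein distance.
```

Because the estimate stays meaningful as training progresses, its value can serve as the debugging-friendly learning curve the abstract mentions.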
August 6, 2017

Language Modeling with Gated Convolutional Networks

International Conference on Machine Learning (ICML)

The predominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens.

Yann Dauphin, Angela Fan, Michael Auli, David Grangier
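
The building block of this model is a gated linear unit over a causal convolution: h = (X∗W + b) ⊗ σ(X∗V + c). A minimal NumPy sketch, with a slow explicit loop over positions (shapes and the left-padding scheme are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_conv_glu(x, w, v, b, c):
    """Gated linear unit over a causal 1-D convolution:
    h = (x * w + b) ⊗ sigmoid(x * v + c).

    x: (T, d) token embeddings; w, v: (k, d, e) kernels of width k.
    """
    T, d = x.shape
    k = w.shape[0]
    # Left-pad so each output position sees only current and past tokens.
    xp = np.concatenate([np.zeros((k - 1, d)), x], axis=0)
    out = np.zeros((T, w.shape[2]))
    gate = np.zeros((T, v.shape[2]))
    for t in range(T):
        window = xp[t:t + k]                          # (k, d)
        out[t] = np.einsum('kd,kde->e', window, w) + b
        gate[t] = np.einsum('kd,kde->e', window, v) + c
    return out * sigmoid(gate)
```

The sigmoid branch acts as a per-dimension gate on the linear branch; unlike an RNN, all T positions could be computed in parallel since there is no recurrent state.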
August 6, 2017

Efficient Softmax Approximation for GPUs

International Conference on Machine Learning (ICML)

We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies.

Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, Hervé Jégou
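
The core idea is to split the vocabulary by frequency so most predictions touch only a small "head" softmax. The sketch below is a simplified two-level illustration of that idea, not the paper's exact adaptive construction; the head/tail split and dimensions are made-up assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Vocabulary split by frequency: a small head of frequent words plus one
# extra slot standing for the tail cluster of rare words.
head_words, tail_words, d = 5, 95, 32

rng = np.random.default_rng(0)
w_head = rng.standard_normal((d, head_words + 1))   # +1: tail-cluster slot
w_tail = rng.standard_normal((d, tail_words))

def word_prob(h, word_id):
    """P(word | h) under a two-level softmax. Frequent words are scored
    directly in the head; a rare word is scored as
    P(tail cluster | h) * P(word | tail, h)."""
    p_head = softmax(h @ w_head)
    if word_id < head_words:
        return p_head[word_id]
    p_tail = softmax(h @ w_tail)
    return p_head[head_words] * p_tail[word_id - head_words]
```

Since frequent words dominate training batches, most updates only pay for the small head matrix instead of a full-vocabulary softmax.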
July 31, 2017

Enriching Word Vectors with Subword Information

TACL, Association for Computational Linguistics (ACL 2017)

Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn […]

Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov
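
The subword information referenced in the title is carried by character n-grams: each word is represented as a bag of its n-grams (plus the word itself), and its vector is the sum of the n-gram vectors. A small sketch of the n-gram extraction, using the common boundary-marker convention with n from 3 to 6:

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams with boundary markers '<' and '>', as used to
    build subword-aware word vectors. For 'where', the 3-grams include
    '<wh', 'whe', 'her', 'ere', 're>'."""
    w = f"<{word}>"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(w) - n + 1):
            grams.add(w[i:i + n])
    return grams
```

Because the vector of an unseen word can be assembled from the vectors of its n-grams, the model can produce representations for out-of-vocabulary and morphologically rare words.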
July 31, 2017

Learning Multilingual Joint Sentence Embeddings with Neural Machine Translation

ACL workshop on Representation Learning for NLP (ACL)

In this paper, we use the framework of neural machine translation to learn joint sentence representations across six very different languages. Our aim is to obtain a representation that is independent of the language and is therefore likely to capture the underlying semantics.

Holger Schwenk, Matthijs Douze
July 30, 2017

Reading Wikipedia to Answer Open-Domain Questions

Association for Computational Linguistics (ACL 2017)

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes
July 30, 2017

Automatically Generating Rhythmic Verse with Neural Networks

Association for Computational Linguistics (ACL 2017)

We propose two novel methodologies for the automatic generation of rhythmic poetry in a variety of forms.

Jack Hopkins, Douwe Kiela
July 22, 2017

Densely Connected Convolutional Networks

CVPR 2017

In this paper, we embrace the observation that convolutional networks can be substantially deeper, more accurate, and more efficient to train if they contain shorter connections between layers close to the input and those close to the output. We introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.

Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
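
The dense connectivity pattern can be sketched in NumPy: each layer takes the concatenation of all preceding feature maps as input and adds a fixed number of new channels (the growth rate). The toy "conv" below is a per-position linear map over channels, an illustrative stand-in for the paper's 3x3 convolutions:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, num_layers, growth_rate):
    """Each layer receives the concatenation of all preceding feature
    maps and contributes `growth_rate` new channels. The '1x1 conv' here
    is a linear map over channels applied at every spatial position."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)    # all prior outputs
        w = rng.standard_normal((inp.shape[-1], growth_rate))
        out = np.maximum(inp @ w, 0.0)             # linear + ReLU
        features.append(out)
    return np.concatenate(features, axis=-1)

x = rng.standard_normal((8, 8, 16))                # H, W, C
y = dense_block(x, num_layers=4, growth_rate=12)
# Channels grow linearly: C + L * k = 16 + 4 * 12 = 64.
```

The linear channel growth is why DenseNets stay parameter-efficient despite every layer seeing every earlier feature map.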
July 21, 2017

Discovering Causal Signals in Images

CVPR 2017

This paper establishes the existence of observable footprints that reveal the “causal dispositions” of the object categories appearing in collections of images.

David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Scholkopf, Leon Bottou
July 21, 2017

Relationship Proposal Networks

Conference on Computer Vision and Pattern Recognition 2017

Image scene understanding requires learning the relationships between objects in the scene. A scene with many objects may have only a few individual interacting objects (e.g., in a party image with many people, only a handful of people might be speaking with each other). To detect all relationships, it would be inefficient to first detect all individual objects and then classify all pairs; not only is the number of all pairs quadratic, but classification requires limited object categories, which is not scalable for real-world images. In this paper we address these challenges by using pairs of related regions in images to train a relationship proposer that at test time produces a manageable number of related regions.

Ji Zhang, Mohamed Elhoseiny, Scott Cohen, Walter Chang, Ahmed Elgammal
July 21, 2017

Link the head to the “beak”: Zero Shot Learning from Noisy Text Description at Part Precision

CVPR 2017

In this paper, we study learning visual classifiers from unstructured text descriptions at part precision with no training images. We propose a learning framework that is able to connect text terms to their relevant parts and suppress connections to non-visual text terms without any part-text annotations.

Mohamed Elhoseiny, Yizhe Zhu, Han Zhang, Ahmed Elgammal
July 21, 2017

CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning

CVPR 2017

We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires.

Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, Larry Zitnick, Ross Girshick