October 22, 2017

Inferring and Executing Programs for Visual Reasoning

International Conference on Computer Vision (ICCV)

Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer.

Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, Larry Zitnick, Ross Girshick
September 7, 2017

Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog

Conference on Empirical Methods in Natural Language Processing (EMNLP)

In this paper, using a Task & Talk reference game between two agents as a testbed, we present a sequence of ‘negative’ results culminating in a ‘positive’ one – showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional.

Satwik Kottur, José M.F. Moura, Stefan Lee, Dhruv Batra
August 6, 2017

Efficient Softmax Approximation for GPUs

International Conference on Machine Learning (ICML)

We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies.

Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, Hervé Jégou
July 31, 2017

Enriching Word Vectors with Subword Information

Transactions of the Association for Computational Linguistics (TACL), presented at ACL 2017

Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn […]

Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov
July 31, 2017

Learning Multilingual Joint Sentence Embeddings with Neural Machine Translation

ACL workshop on Representation Learning for NLP (ACL)

In this paper, we use the framework of neural machine translation to learn joint sentence representations across six very different languages. Our premise is that a representation which is independent of the language is likely to capture the underlying semantics.

Holger Schwenk, Matthijs Douze
July 30, 2017

Reading Wikipedia to Answer Open-Domain Questions

Association for Computational Linguistics (ACL 2017)

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes
July 30, 2017

Automatically Generating Rhythmic Verse with Neural Networks

Association for Computational Linguistics (ACL 2017)

We propose two novel methodologies for the automatic generation of rhythmic poetry in a variety of forms.

Jack Hopkins
April 3, 2017

Bag of Tricks for Efficient Text Classification

European Chapter of the Association for Computational Linguistics (EACL)

This paper explores a simple and efficient baseline for text classification.

Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov
November 1, 2016

Neural Text Generation from Structured Data with Application to the Biography Domain

Empirical Methods in Natural Language Processing (EMNLP)

This paper introduces a neural model for concept-to-text generation that scales to large, rich domains.

Remi Lebret, David Grangier, Michael Auli
October 28, 2016

Bilingual Methods for Adaptive Training Data Selection for Machine Translation

Association for Machine Translation in the Americas (AMTA)

We propose a new data selection method which uses semi-supervised convolutional neural networks based on bitokens (Bi-SSCNNs) for training machine translation systems from a large bilingual corpus.

Boxing Chen, Roland Kuhn, George Foster, Colin Cherry, Fei Huang
September 8, 2016

Joint Learning of Speaker and Phonetic Similarities with Siamese Networks

Interspeech 2016

We scale joint learning of specialized speaker and phone embeddings to the 360 hours of the Librispeech corpus by implementing a sampling method that efficiently selects pairs of words from the dataset and by improving the loss function.

Neil Zeghidour, Gabriel Synnaeve, Nicolas Usunier, Emmanuel Dupoux
August 11, 2016

Semi-supervised Convolutional Networks for Translation Adaptation with Tiny Amount of In-domain Data

Conference on Computational Natural Language Learning (CoNLL)

We propose a method which uses semi-supervised convolutional neural networks (CNNs) to select in-domain training data for statistical machine translation.

Boxing Chen, Fei Huang
August 10, 2016

Neural Network-based Word Alignment through Score Aggregation

ACL Conference on Machine Translation (WMT)

We present a simple neural network for word alignment that builds source and target word window representations to compute alignment scores for sentence pairs.

Michael Auli, Ronan Collobert, Joel Legrand
June 8, 2016

Key-Value Memory Networks for Directly Reading Documents

EMNLP 2016

This paper introduces a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation.

Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, Jason Weston
April 13, 2016

Abstractive Summarization with Attentive RNN

NAACL 2016

Abstractive sentence summarization generates a shorter version of a given sentence while attempting to preserve its meaning. We introduce a conditional recurrent neural network (RNN) which generates a summary of an input sentence.

Sumit Chopra, Michael Auli, Alexander M. Rush
April 1, 2016

The Goldilocks Principle: Reading Children’s Books with Explicit Memory Representations

ICLR 2016

We introduce a new test of how well language models capture meaning in children’s books.

Felix Hill, Antoine Bordes, Sumit Chopra, Jason Weston
September 17, 2015

Improved Arabic Dialect Classification with Social Media Data

Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Arabic dialect classification has been an important and challenging problem for Arabic language processing, especially for social media text analysis and machine translation. In this paper we propose an approach to improving Arabic dialect classification with semi-supervised learning: multiple classifiers are trained with weakly supervised, strongly supervised, and unsupervised data. Their combination yields significant and consistent improvement on two different test sets.

Fei Huang
December 4, 2014

Extracting Translation Pairs from Social Network Content

International Workshop on Spoken Language Translation

We describe two methods for collecting translation pairs from public Facebook content. Using the extracted translation pairs as additional training data for machine translation systems, we show significant improvements.

Matthias Eck, Yury Zemlyanskiy, Joy Zhang, Alex Waibel
September 4, 2014

Question Answering with Subgraph Embeddings

Empirical Methods in Natural Language Processing

This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few handcrafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers.

Antoine Bordes, Jason Weston, Sumit Chopra
September 4, 2014

#TagSpace: Semantic Embeddings from Hashtags

Empirical Methods in Natural Language Processing

We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags.

Jason Weston, Sumit Chopra, Keith Adams