
July 28, 2019

What makes a good conversation? How controllable attributes affect human judgments

North American Chapter of the Association for Computational Linguistics (NAACL)

In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking.

By: Abigail See, Stephen Roller, Douwe Kiela, Jason Weston
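
Of the two control methods named in the abstract, weighted decoding is the simpler to illustrate: at each decoding step, per-token feature scores are added to the model's log-probabilities, scaled by a control weight. A minimal sketch, with all names and values illustrative rather than taken from the paper (here the feature is a toy repetition penalty on one token):

```python
import numpy as np

def weighted_decode_step(log_probs, feature_scores, weight):
    """One step of weighted decoding (illustrative sketch).

    log_probs: model log-probabilities over the vocabulary
    feature_scores: per-token scores for a controllable attribute
    weight: scalar controlling how strongly the attribute is pushed
    """
    adjusted = log_probs + weight * feature_scores
    return int(np.argmax(adjusted))

# toy vocabulary of 4 tokens; the feature penalizes token 0
# (e.g. a word that was already generated)
log_probs = np.log(np.array([0.5, 0.3, 0.15, 0.05]))
repetition_penalty = np.array([-1.0, 0.0, 0.0, 0.0])
print(weighted_decode_step(log_probs, repetition_penalty, weight=0.0))  # → 0
print(weighted_decode_step(log_probs, repetition_penalty, weight=5.0))  # → 1
```

Raising the weight steers decoding away from the penalized token without retraining the model, which is what makes the attribute controllable at test time.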

July 28, 2019

CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks

Annual Meeting of the Association for Computational Linguistics (ACL)

We test a convolutional network (CNN) on these tasks and report greatly improved performance with respect to RNNs. Despite this improvement, however, the CNN does not induce systematic rules, suggesting that the difference between compositional and non-compositional behaviour is not clear-cut.

By: Roberto Dessi, Marco Baroni

July 28, 2019

CoDraw: Collaborative Drawing as a Testbed for Grounded Goal-driven Communication

Association for Computational Linguistics (ACL)

In this work, we propose a goal-driven collaborative task that combines language, perception, and action. Specifically, we develop a Collaborative image-Drawing game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art objects.

By: Jin-Hwa Kim, Nikita Kitaev, Xinlei Chen, Marcus Rohrbach, Byoung-Tak Zhang, Yuandong Tian, Dhruv Batra, Devi Parikh

July 28, 2019

ELI5: Long Form Question Answering

Association for Computational Linguistics (ACL)

We introduce the first large-scale corpus for long-form question answering, a task requiring elaborate and in-depth answers to open-ended questions. The dataset comprises 270K threads from the Reddit forum “Explain Like I’m Five” (ELI5), where an online community provides answers to questions that are comprehensible to five-year-olds.

By: Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, Michael Auli

July 27, 2019

Unsupervised Question Answering by Cloze Translation

Association for Computational Linguistics (ACL)

Obtaining training data for Question Answering (QA) is time-consuming and resource-intensive, and existing QA datasets are only available for limited domains and languages. In this work, we explore to what extent high-quality training data is actually required for Extractive QA, and investigate the possibility of unsupervised Extractive QA.

By: Patrick Lewis, Ludovic Denoyer, Sebastian Riedel
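
The "cloze translation" in the title starts from fill-in-the-blank statements derived from raw text. A toy sketch of that first step (the function name and mask token are illustrative, not from the paper):

```python
def make_cloze(sentence, answer, mask_token="[MASK]"):
    """Replace the answer span in a statement with a mask token,
    yielding a cloze-style 'question' that a later model can
    translate into a natural-language question."""
    assert answer in sentence, "answer span must occur in the sentence"
    return sentence.replace(answer, mask_token, 1)

print(make_cloze("The Eiffel Tower was completed in 1889.", "1889"))
# → The Eiffel Tower was completed in [MASK].
```

Because the masked span is known, each generated cloze comes with a free answer label, which is what makes training without annotated QA pairs possible.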

July 27, 2019

The Referential Reader: A Recurrent Entity Network for Anaphora Resolution

Association for Computational Linguistics (ACL)

We present a new architecture for storing and accessing entity mentions during online text processing. While reading the text, entity references are identified, and may be stored by either updating or overwriting a cell in a fixed-length memory.

By: Fei Liu, Luke Zettlemoyer, Jacob Eisenstein

July 27, 2019

Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings

Association for Computational Linguistics (ACL)

Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora. In this paper, we propose a new method for this task based on multilingual sentence embeddings.

By: Mikel Artetxe, Holger Schwenk
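
The margin criterion at the heart of the method divides a raw cosine similarity by the average similarity of each sentence to its nearest neighbours, down-weighting candidate pairs that sit in dense regions of embedding space. A small sketch under the assumption that neighbour lists are precomputed (function names and the tiny 2-d vectors are illustrative):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def margin_score(x, y, x_neighbors, y_neighbors, k=2):
    """Ratio-margin score between two sentence embeddings: the raw
    cosine similarity divided by the mean similarity of x and y to
    their k nearest neighbours."""
    avg_x = sum(cosine(x, n) for n in x_neighbors[:k]) / k
    avg_y = sum(cosine(y, n) for n in y_neighbors[:k]) / k
    return cosine(x, y) / ((avg_x + avg_y) / 2)

# identical embeddings score far above their neighbourhood average
x = np.array([1.0, 0.0])
neighbors = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]
score = margin_score(x, x, neighbors, neighbors)
```

Mining then keeps the pairs whose margin score exceeds a threshold, rather than thresholding the raw cosine similarity directly.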

July 27, 2019

Adaptive Attention Span in Transformers

Association for Computational Linguistics (ACL)

We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers while maintaining control over their memory footprint and computation time.

By: Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin
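
Learning the span works by replacing a hard attention cutoff with a soft mask that ramps linearly from 1 down to 0 around a learnable span parameter, so the span itself receives gradients. A sketch of such a masking function (parameter names are illustrative; in the paper the mask is applied to the attention weights before normalization):

```python
import numpy as np

def soft_span_mask(distances, z, ramp=4.0):
    """Soft span mask: 1 for positions within z of the query, then a
    linear ramp of length `ramp` down to 0, so the span parameter z
    can be adjusted by gradient descent."""
    return np.clip((ramp + z - distances) / ramp, 0.0, 1.0)

distances = np.arange(10.0)  # 0 = current position, 9 = nine tokens back
mask = soft_span_mask(distances, z=3.0)
# positions up to 3 are fully attended, 4-6 partially, 7+ not at all
```

Because most attention heads learn short spans, positions beyond each head's span can be skipped entirely, which is where the memory and compute savings come from.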

July 26, 2019

Strategies for Structuring Story Generation

Association for Computational Linguistics (ACL)

Writers often rely on plans or sketches to write long stories, but most current language models generate word by word from left to right. We explore coarse-to-fine models for creating narrative texts of several hundred words, and introduce new models which decompose stories by abstracting over actions and entities.

By: Angela Fan, Mike Lewis, Yann Dauphin

July 26, 2019

On the Distribution of Deep Clausal Embeddings: A Large Cross-linguistic Study

Association for Computational Linguistics (ACL)

We introduce here a collection of large, dependency-parsed written corpora in 17 languages that allow us, for the first time, to capture clausal embeddings through dependency graphs and assess their distribution.

By: Damián E. Blasi, Ryan Cotterell, Lawrence Wolf-Sonkin, Sabine Stoll, Balthasar Bickel, Marco Baroni