July 28, 2019

CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks

Association for Computational Linguistics (ACL)

We test a convolutional network (CNN) on the SCAN compositional generalization tasks, reporting hugely improved performance with respect to RNNs. Despite the big improvement, however, the CNN has not induced systematic rules, suggesting that the difference between compositional and non-compositional behaviour is not clear-cut.

By: Roberto Dessì, Marco Baroni
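
The "jump" in the title refers to SCAN-style command-execution tasks, in which commands compose a small set of primitives that a model must learn to generalize over systematically. As a toy illustration (a simplified grammar fragment of my own, not the benchmark itself):

```python
# Toy illustration (not from the paper): SCAN-style commands map to action
# sequences compositionally. A model generalizes systematically if, after
# learning "jump" mostly in isolation, it can execute "jump twice",
# "jump and walk", etc.

PRIMITIVES = {"jump": ["JUMP"], "walk": ["WALK"], "run": ["RUN"], "look": ["LOOK"]}

def execute(command: str) -> list[str]:
    """Interpret a tiny SCAN-like fragment: '<prim>', '<x> twice', '<x> and <y>'."""
    if " and " in command:
        left, right = command.split(" and ", 1)
        return execute(left) + execute(right)
    if command.endswith(" twice"):
        return execute(command[: -len(" twice")]) * 2
    if command.endswith(" thrice"):
        return execute(command[: -len(" thrice")]) * 3
    return PRIMITIVES[command]

assert execute("jump twice and walk") == ["JUMP", "JUMP", "WALK"]
```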

July 27, 2019

Unsupervised Question Answering by Cloze Translation

Association for Computational Linguistics (ACL)

Obtaining training data for Question Answering (QA) is time-consuming and resource-intensive, and existing QA datasets are only available for limited domains and languages. In this work, we explore to what extent high quality training data is actually required for Extractive QA, and investigate the possibility of unsupervised Extractive QA.

By: Patrick Lewis, Ludovic Denoyer, Sebastian Riedel
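
The excerpt above does not spell out the method, but the "cloze translation" of the title suggests the general recipe: mask an answer span in raw text to form a cloze statement, then rewrite the cloze as a natural question. A minimal rule-based sketch of that idea (the heuristics below are placeholders for illustration, not the authors' pipeline):

```python
# Illustrative sketch only: generate (question, answer) pairs from raw text by
# masking a candidate answer span to form a cloze statement, then "translating"
# the cloze into a natural-looking question.
MASK = "[MASK]"

def make_cloze(sentence: str, answer: str) -> str:
    """Replace the answer span with a mask token to form a cloze statement."""
    return sentence.replace(answer, MASK, 1)

def cloze_to_question(cloze: str, answer_type: str = "entity") -> str:
    """Naive cloze-to-question rewrite via wh-word substitution."""
    wh = {"entity": "who or what", "place": "where", "date": "when"}[answer_type]
    question = cloze.replace(MASK, wh, 1).rstrip(".")
    return question[0].upper() + question[1:] + "?"

sentence = "Marie Curie discovered polonium in 1898."
cloze = make_cloze(sentence, "Marie Curie")   # "[MASK] discovered polonium in 1898."
print(cloze_to_question(cloze))               # "Who or what discovered polonium in 1898?"
```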

July 27, 2019

The Referential Reader: A Recurrent Entity Network for Anaphora Resolution

Association for Computational Linguistics (ACL)

We present a new architecture for storing and accessing entity mentions during online text processing. While reading the text, entity references are identified, and may be stored by either updating or overwriting a cell in a fixed-length memory.

By: Fei Liu, Luke Zettlemoyer, Jacob Eisenstein
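
A minimal sketch of the kind of fixed-length entity memory the abstract describes, with its two write operations, update and overwrite. The gating and salience details here are assumptions made for illustration; the paper's model learns them from the text:

```python
# Minimal sketch (assumed details, not the paper's architecture): a fixed-length
# memory of entity cells. Each incoming mention either UPDATEs an existing cell
# (blending in new information) or OVERWRITEs the weakest cell to start tracking
# a new entity. In the actual model the choice is made by learned gates.
import numpy as np

class EntityMemory:
    def __init__(self, n_cells: int, dim: int):
        self.keys = np.zeros((n_cells, dim))   # one vector per tracked entity
        self.salience = np.zeros(n_cells)      # how "active" each cell is

    def read(self, mention: np.ndarray) -> int:
        """Return the cell whose key best matches the mention (cosine similarity)."""
        norms = np.linalg.norm(self.keys, axis=1) * np.linalg.norm(mention) + 1e-8
        return int(np.argmax(self.keys @ mention / norms))

    def write(self, mention: np.ndarray, update_gate: float, threshold: float = 0.5):
        """Update the best-matching cell, or overwrite the least salient one."""
        i = self.read(mention)
        if update_gate > threshold:            # treat as a mention of a known entity
            self.keys[i] = 0.5 * self.keys[i] + 0.5 * mention
        else:                                  # treat as a new entity
            i = int(np.argmin(self.salience))
            self.keys[i] = mention
        self.salience *= 0.9                   # saliences decay as reading proceeds
        self.salience[i] = 1.0

mem = EntityMemory(n_cells=4, dim=8)
mem.write(np.random.randn(8), update_gate=0.0)   # first mention: overwrite a cell
```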

July 27, 2019

Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings

Association for Computational Linguistics (ACL)

Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora. In this paper, we propose a new method for this task based on multilingual sentence embeddings.

By: Mikel Artetxe, Holger Schwenk
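
The excerpt does not give the scoring function, but margin-based mining is commonly formulated as cosine similarity normalized by each sentence's average similarity to its k nearest neighbours, which penalizes "hub" sentences that are close to everything. A sketch of that ratio-style criterion (my formulation of the general idea; see the paper for the exact variant):

```python
# Sketch of a ratio-style margin criterion for parallel corpus mining,
# assuming L2-normalized multilingual sentence embeddings.
import numpy as np

def margin_scores(X: np.ndarray, Y: np.ndarray, k: int = 4) -> np.ndarray:
    """X: (n, d) source embeddings, Y: (m, d) target embeddings, L2-normalized.
    Returns an (n, m) matrix of margin scores."""
    sim = X @ Y.T                                      # cosine similarities
    # Average similarity of each sentence to its k nearest neighbours.
    knn_x = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # (n,)
    knn_y = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # (m,)
    denom = (knn_x[:, None] + knn_y[None, :]) / 2.0
    return sim / denom

# Mine the best target candidate for each source sentence:
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.normal(size=(7, 16)); Y /= np.linalg.norm(Y, axis=1, keepdims=True)
best = margin_scores(X, Y).argmax(axis=1)              # best match per source row
```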

July 27, 2019

Adaptive Attention Span in Transformers

Association for Computational Linguistics (ACL)

We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers while maintaining control over their memory footprint and computational time.

By: Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin
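
A sketch of one way such a learnable span can work: a soft mask that ramps attention weights down to zero beyond a per-head span parameter z, which can then be penalized so each head keeps only the context it needs. The clamped-ramp form below is an assumption of this sketch, not quoted from the paper:

```python
# Sketch of a soft attention-span mask. Positions farther than z tokens away
# get weight 0, with a ramp of width R in between; z is a learnable scalar.
import numpy as np

def span_mask(dist: np.ndarray, z: float, R: float = 32.0) -> np.ndarray:
    """Soft mask m(d) = clamp((R + z - d) / R, 0, 1) over token distances d."""
    return np.clip((R + z - dist) / R, 0.0, 1.0)

def masked_attention(scores: np.ndarray, z: float) -> np.ndarray:
    """Mask attention weights by distance to the current token, then renormalize."""
    dist = np.arange(len(scores), dtype=float)[::-1]   # last entry = most recent token
    weights = np.exp(scores - scores.max()) * span_mask(dist, z)
    return weights / (weights.sum() + 1e-8)

attn = masked_attention(np.random.randn(128), z=20.0)
# Heads with a small learned z attend only locally, so keys and values outside
# the span never need to be computed or stored.
```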

July 26, 2019

Strategies for Structuring Story Generation

Association for Computational Linguistics (ACL)

Writers often rely on plans or sketches to write long stories, but most current language models generate word by word from left to right. We explore coarse-to-fine models for creating narrative texts of several hundred words, and introduce new models which decompose stories by abstracting over actions and entities.

By: Angela Fan, Mike Lewis, Yann Dauphin
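
One of the decompositions mentioned above, abstracting over entities, can be pictured as rewriting a story with numbered placeholders so the narrative structure is generated before the entities are named. A toy version (the ent&lt;i&gt; placeholder scheme is mine, for illustration only):

```python
# Toy illustration of "abstracting over entities": rewrite a story with
# numbered placeholders so a model can first generate the abstracted
# narrative and only later decide who the entities are.
import re

def anonymize_entities(story: str, entities: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each known entity with a placeholder like ent0, ent1, ..."""
    mapping = {}
    for name in entities:
        placeholder = f"ent{len(mapping)}"
        mapping[placeholder] = name
        story = re.sub(rf"\b{re.escape(name)}\b", placeholder, story)
    return story, mapping

story = "Ada met Boris in Paris. Ada smiled, and Boris waved."
abstracted, mapping = anonymize_entities(story, ["Ada", "Boris", "Paris"])
# abstracted == "ent0 met ent1 in ent2. ent0 smiled, and ent1 waved."
# A surface realizer can later substitute mapping back in.
```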

July 26, 2019

On the Distribution of Deep Clausal Embeddings: A Large Cross-linguistic Study

Association for Computational Linguistics (ACL)

We introduce a collection of large, dependency-parsed written corpora in 17 languages that allow us, for the first time, to capture clausal embeddings through dependency graphs and assess their distribution.

By: Damián E. Blasi, Ryan Cotterell, Lawrence Wolf-Sonkin, Sabine Stoll, Balthasar Bickel, Marco Baroni
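
A sketch of how clausal embedding depth can be read off a dependency graph: count clause-introducing edges along root-to-leaf paths. The relation labels below are from Universal Dependencies and stand in for whatever criteria the paper actually uses:

```python
# Sketch: clausal embedding depth = the maximum number of clause-introducing
# dependency edges on any path from the root (UD labels assumed).
CLAUSAL = {"ccomp", "xcomp", "advcl", "acl", "acl:relcl", "csubj"}

def embedding_depth(tokens: list[tuple[int, int, str]]) -> int:
    """tokens: (id, head_id, deprel) triples, with head_id 0 for the root."""
    children = {}
    for tid, head, rel in tokens:
        children.setdefault(head, []).append((tid, rel))

    def depth(node: int) -> int:
        return max(
            (depth(tid) + (rel in CLAUSAL) for tid, rel in children.get(node, [])),
            default=0,
        )
    return depth(0)

# "Mary thinks [John said [it rained]]": two stacked ccomp clauses -> depth 2.
parse = [(1, 2, "nsubj"), (2, 0, "root"), (3, 4, "nsubj"),
         (4, 2, "ccomp"), (5, 6, "nsubj"), (6, 4, "ccomp")]
assert embedding_depth(parse) == 2
```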

July 18, 2019

Tabula nearly rasa: Probing the linguistic knowledge of character-level neural language models trained on unsegmented text

Transactions of the Association for Computational Linguistics (TACL)

Recurrent neural networks (RNNs) have reached striking performance in many natural language processing tasks. This has renewed interest in whether these generic sequence processing devices are inducing genuine linguistic knowledge. Nearly all current analytical studies, however, initialize the RNNs with a vocabulary of known words, and feed them tokenized input during training. We present a multi-lingual study of the linguistic knowledge encoded in RNNs trained as character-level language models, on input data with word boundaries removed.

By: Michael Hahn, Marco Baroni
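
The input condition the abstract describes is simple to picture: strip the word boundaries and feed the model raw characters. A minimal sketch (the indexing scheme is illustrative):

```python
# Sketch of the unsegmented-input condition: remove spaces and train a
# character-level language model on the resulting id sequence.
def to_char_ids(text: str) -> tuple[list[int], dict[str, int]]:
    """Remove spaces and map each character to an integer id."""
    unsegmented = text.replace(" ", "")
    vocab = {ch: i for i, ch in enumerate(sorted(set(unsegmented)))}
    return [vocab[ch] for ch in unsegmented], vocab

ids, vocab = to_char_ids("the cat sat on the mat")
# A character-level LM is then trained to predict ids[t+1] from ids[:t+1],
# with no access to where the original word boundaries were.
```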

July 18, 2019

Why Build an Assistant in Minecraft?

arXiv

In this document we describe a rationale for a research program aimed at building an open “assistant” in the game Minecraft, in order to make progress on the problems of natural language understanding and learning from dialogue.

By: Arthur Szlam, Jonathan Gray, Kavya Srinet, Yacine Jernite, Armand Joulin, Gabriel Synnaeve, Douwe Kiela, Haonan Yu, Zhuoyuan Chen, Siddharth Goyal, Demi Guo, Danielle Rothermel, Larry Zitnick, Jason Weston

July 17, 2019

CraftAssist: A Framework for Dialogue-enabled Interactive Agents

arXiv

This paper describes an implementation of a bot assistant in Minecraft, and the tools and platform allowing players to interact with the bot and to record those interactions. The purpose of building such an assistant is to facilitate the study of agents that can complete tasks specified by dialogue, and eventually, to learn from dialogue interactions.

By: Jonathan Gray, Kavya Srinet, Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, Siddharth Goyal, Larry Zitnick, Arthur Szlam