
July 29, 2019

OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs

Association for Computational Linguistics (ACL)

We study a conversational reasoning model that strategically traverses a large-scale common-fact knowledge graph (KG) to introduce engaging and contextually diverse entities and attributes. For this study, we collect a new Open-ended Dialog ↔ KG parallel corpus called OpenDialKG, where each utterance from 15K human-to-human role-playing dialogs is manually annotated with ground-truth references to corresponding entities and paths from a large-scale KG with 1M+ facts.

By: Shane Moon, Pararth Shah, Anuj Kumar, Rajen Subba
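As a toy illustration of the kind of entity-to-entity KG walks annotated in OpenDialKG, the sketch below breadth-first searches a small hand-made graph for a path between two entities. The entities, relations, and graph structure here are invented for illustration, not drawn from the corpus or the paper's attention-based model:

```python
from collections import deque

# A toy knowledge graph as an adjacency list of (relation, entity) edges.
KG = {
    "The Martian": [("written_by", "Andy Weir"), ("genre", "Science Fiction")],
    "Andy Weir": [("wrote", "Artemis")],
    "Science Fiction": [("has_work", "Dune")],
}

def find_path(start, goal):
    """Breadth-first search for an entity path, returned as a list of
    (head, relation, tail) triples; None if no walk exists."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        entity, path = queue.popleft()
        if entity == goal:
            return path
        for relation, neighbor in KG.get(entity, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [(entity, relation, neighbor)]))
    return None

print(find_path("The Martian", "Artemis"))
# [('The Martian', 'written_by', 'Andy Weir'), ('Andy Weir', 'wrote', 'Artemis')]
```

A learned model would score such walks with attention rather than exhaustive search, but the path structure being predicted is the same.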

July 29, 2019

Keeping Notes: Conditional Natural Language Generation with a Scratchpad Mechanism

Association for Computational Linguistics (ACL)

We introduce the Scratchpad Mechanism, a novel addition to the sequence-to-sequence (seq2seq) neural network architecture, and demonstrate its effectiveness in improving the overall fluency of seq2seq models for natural language generation tasks.

By: Ryan Y. Benmalek, Madian Khabsa, Suma Desu, Claire Cardie, Michele Banko

July 29, 2019

Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset

Association for Computational Linguistics (ACL)

One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill. While it is straightforward for humans to recognize and acknowledge others’ feelings in a conversation, this is a significant challenge for AI systems due to the paucity of suitable publicly-available datasets for training and evaluation. This work proposes a new benchmark for empathetic dialogue generation and EMPATHETICDIALOGUES, a novel dataset of 25k conversations grounded in emotional situations.

By: Hannah Rashkin, Eric Michael Smith, Margaret Li, Y-Lan Boureau

July 29, 2019

Better Character Language Modeling Through Morphology

Association for Computational Linguistics (ACL)

We incorporate morphological supervision into character language models (CLMs) via multitasking and show that this addition improves bits-per-character (BPC) performance across 24 languages, even when the morphology data and language modeling data are disjoint.

By: Terra Blevins, Luke Zettlemoyer
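Bits-per-character (BPC), the metric reported above, is simply the average negative log2 probability a model assigns to each character of held-out text. A minimal sketch of the computation (the probability values below are invented for illustration):

```python
import math

def bits_per_character(char_probs):
    """Average negative log2 probability assigned to each character."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

# A model that assigns probability 0.5 to every character scores exactly 1 BPC;
# sharper predictions drive BPC down, so lower is better.
print(bits_per_character([0.5, 0.5, 0.5, 0.5]))  # 1.0
print(round(bits_per_character([0.9, 0.8, 0.25]), 3))  # 0.825
```

In practice BPC is computed from the model's cross-entropy loss (nats divided by ln 2), but the quantity is the same.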

July 28, 2019

Miss Tools and Mr Fruit: Emergent communication in agents learning about object affordances

Association for Computational Linguistics (ACL)

Recent research studies communication emergence in communities of deep network agents assigned a joint task, hoping to gain insights into human language evolution. We propose here a new task capturing crucial aspects of the human environment, such as natural object affordances, and of human conversation, such as full symmetry among the participants.

By: Diane Bouchacourt, Marco Baroni

July 28, 2019

Inferring Concept Hierarchies from Text Corpora via Hyperbolic Embeddings

Association for Computational Linguistics (ACL)

We consider the task of inferring is-a relationships from large text corpora. For this purpose, we propose a new method combining hyperbolic embeddings and Hearst patterns. This approach allows us to set appropriate constraints for inferring concept hierarchies from distributional contexts while also being able to predict missing is-a relationships and to correct wrong extractions.

By: Matt Le, Stephen Roller, Laetitia Papaxanthos, Douwe Kiela, Maximilian Nickel
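Hearst patterns are lexico-syntactic templates, such as "X such as Y", that signal is-a pairs in raw text. A minimal single-pattern sketch (the regex and example sentence are illustrative; the paper uses a broader pattern set and embeds the extracted pairs in hyperbolic space):

```python
import re

# One classic Hearst pattern: "<hypernym> such as <hyponym>(, <hyponym>)*".
PATTERN = re.compile(r"(\w+) such as ((?:\w+(?:, )?)+)")

def extract_isa(sentence):
    """Extract (hyponym, 'is-a', hypernym) triples matching the toy pattern."""
    pairs = []
    for match in PATTERN.finditer(sentence):
        hypernym = match.group(1)
        for hyponym in match.group(2).split(", "):
            pairs.append((hyponym, "is-a", hypernym))
    return pairs

print(extract_isa("She studies languages such as French, Spanish"))
# [('French', 'is-a', 'languages'), ('Spanish', 'is-a', 'languages')]
```

Such pattern matches are high-precision but sparse, which is why combining them with embeddings that generalize across contexts is attractive.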

July 28, 2019

Training Hybrid Language Models by Marginalizing over Segmentations

Association for Computational Linguistics (ACL)

In this paper, we study the problem of hybrid language modeling, that is, using models which can predict both characters and larger units such as character n-grams or words. Using such models, multiple potential segmentations usually exist for a given string, for example, one using words and one using only characters.

By: Edouard Grave, Sainbayar Sukhbaatar, Piotr Bojanowski, Armand Joulin
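The multiple segmentations mentioned above can be enumerated explicitly for a small example; a hybrid model marginalizes, i.e. sums probabilities, over all of them rather than committing to one. The vocabulary below is invented for illustration:

```python
def segmentations(s, vocab):
    """Enumerate every way to split `s` into units, where single characters
    are always available and longer chunks must appear in `vocab`."""
    if not s:
        return [[]]
    results = []
    for i in range(1, len(s) + 1):
        prefix = s[:i]
        if i == 1 or prefix in vocab:
            results.extend([prefix] + rest for rest in segmentations(s[i:], vocab))
    return results

# "cat" can be spelled out character by character, split as "c" + "at",
# or emitted as the single word unit "cat".
print(segmentations("cat", {"cat", "at"}))
# [['c', 'a', 't'], ['c', 'at'], ['cat']]
```

The number of segmentations grows exponentially with string length, so real models marginalize with dynamic programming instead of explicit enumeration.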

July 28, 2019

How to Get Past Sesame Street: Sentence-Level Pretraining Beyond Language Modeling

Association for Computational Linguistics (ACL)

Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling. We conduct the first large-scale systematic study of candidate pretraining tasks, comparing 19 different tasks both as alternatives and complements to language modeling.

By: Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman

July 28, 2019

Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation

Association for Computational Linguistics (ACL)

Given a rough, word-by-word gloss of a source language sentence, target language natives can uncover the latent, fully-fluent rendering of the translation. In this work we explore this intuition by breaking translation into a two-step process: generating a rough gloss by means of a dictionary and then ‘translating’ the resulting pseudo-translation, or ‘Translationese’, into a fully fluent translation.

By: Nima Pourdamghani, Nada Aldarrab, Marjan Ghazvininejad, Kevin Knight, Jonathan May
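Step one of the approach, the rough word-by-word gloss, can be sketched with a toy dictionary. The glossary and example below are invented; a real system must also handle morphology, word order, and out-of-vocabulary words:

```python
# Toy Spanish-to-English dictionary for illustration only.
GLOSSARY = {"el": "the", "perro": "dog", "grande": "big"}

def gloss(source_tokens):
    """Step one: word-by-word gloss, keeping unknown words untranslated."""
    return [GLOSSARY.get(tok.lower(), tok) for tok in source_tokens]

pseudo = gloss(["el", "perro", "grande"])
print(" ".join(pseudo))  # "the dog big" -- disfluent 'Translationese' that a
# second, monolingual model would rewrite as fluent "the big dog"
```

The key point is that step two needs no parallel data: turning Translationese into fluent text can be learned from target-language text alone.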

July 28, 2019

Learning to Optimize Halide with Tree Search and Random Programs

ACM SIGGRAPH

We present a new algorithm to automatically schedule Halide programs for high-performance image processing and deep learning. We significantly improve upon the performance of previous methods, which considered a limited subset of schedules.

By: Andrew Adams, Karima Ma, Luke Anderson, Riyadh Baghdadi, Tzu-Mao Li, Michaël Gharbi, Benoit Steiner, Steven Johnson, Kayvon Fatahalian, Frédo Durand, Jonathan Ragan-Kelley