November 1, 2018

Horizon: Facebook’s Open Source Applied Reinforcement Learning Platform

arXiv

In this paper we present Horizon, Facebook’s open source applied reinforcement learning (RL) platform. Horizon is an end-to-end platform designed to solve industry applied RL problems where datasets are large (millions to billions of observations), the feedback loop is slow (vs. a simulator), and experiments must be done with care because they don’t run in a simulator.

By: Jason Gauci, Edoardo Conti, Yitao Liang, Kittipat Virochsiri, Yuchen He, Zachary Kaden, Vivek Narayanan, Xiaohui Ye

October 31, 2018

Retrieve and Refine: Improved Sequence Generation Models For Dialogue

Workshop on Search-Oriented Conversational AI (SCAI) at EMNLP

In this work we develop a model that combines the retrieval-based and generative approaches to avoid the deficiencies of each: it first retrieves a response and then refines it, with the final sequence generator treating the retrieved response as additional context.

By: Jason Weston, Emily Dinan, Alexander H. Miller
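
A minimal sketch of the retrieve-then-refine pipeline described in the entry above, assuming a toy bag-of-words retriever and an illustrative separator token (the paper's retriever, separator, and generator all differ):

```python
from collections import Counter
import math

def bow_cosine(a, b):
    """Bag-of-words cosine similarity between two utterances (toy retriever)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def retrieve(context, candidate_responses):
    """Step 1: retrieve the candidate response most similar to the dialogue context."""
    return max(candidate_responses, key=lambda c: bow_cosine(context, c))

def build_refiner_input(context, retrieved, sep=" [RETRIEVED] "):
    """Step 2: append the retrieved response to the context so a standard
    sequence-to-sequence generator can condition on it while generating."""
    return context + sep + retrieved

candidates = ["i love hiking in the mountains", "my favorite food is pizza"]
context = "what do you like to do on weekends ?"
print(build_refiner_input(context, retrieve(context, candidates)))
```

The concatenated string is what the sequence generator consumes; during training it can learn when to copy the retrieval and when to rewrite it.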

October 31, 2018

Extending Neural Generative Conversational Model using External Knowledge Sources

Empirical Methods in Natural Language Processing (EMNLP)

The use of connectionist approaches in conversational agents has been progressing rapidly due to the availability of large corpora. However, current generative dialogue models often lack coherence and are content-poor. This work proposes an architecture that incorporates unstructured knowledge sources to enhance next-utterance prediction in chit-chat style generative dialogue models.

By: Prasanna Parthasarathi, Joelle Pineau
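
As a rough illustration of such an architecture (a sketch under assumptions, not necessarily the paper's exact design), the external knowledge can be encoded separately and fused with the dialogue-context encoding before the next utterance is decoded:

```python
import torch
import torch.nn as nn

class KnowledgeAugmentedEncoder(nn.Module):
    """Sketch: fuse a dialogue-context encoding with an encoding of external
    knowledge text; the fused vector conditions the response decoder."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.context_rnn = nn.GRU(dim, dim, batch_first=True)
        self.knowledge_rnn = nn.GRU(dim, dim, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, context_ids, knowledge_ids):
        _, h_ctx = self.context_rnn(self.embed(context_ids))      # (1, batch, dim)
        _, h_kno = self.knowledge_rnn(self.embed(knowledge_ids))  # (1, batch, dim)
        fused = torch.tanh(self.fuse(torch.cat([h_ctx[-1], h_kno[-1]], dim=-1)))
        return fused  # would be passed to a standard decoder (omitted here)

enc = KnowledgeAugmentedEncoder(vocab_size=1000)
ctx = torch.randint(0, 1000, (2, 12))   # batch of 2 dialogue contexts
kno = torch.randint(0, 1000, (2, 30))   # batch of 2 retrieved knowledge passages
print(enc(ctx, kno).shape)              # torch.Size([2, 128])
```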

October 31, 2018

Simple Fusion: Return of the Language Model

Conference on Machine Translation (WMT)

Neural Machine Translation (NMT) typically leverages monolingual data in training through back-translation. We investigate a simple alternative method for using monolingual data in NMT training: we combine the scores of a pre-trained and fixed language model (LM) with the scores of a translation model (TM) while the TM is trained from scratch.

By: Felix Stahlberg, James Cross, Veselin Stoyanov
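
A minimal sketch of this kind of fusion, assuming PyTorch and collapsing the paper's variants into one simple scheme (add the fixed LM's log-probabilities to the TM's logits and renormalize), so that gradients reach only the TM:

```python
import torch
import torch.nn.functional as F

def fused_logprobs(tm_logits, lm_logits):
    """Combine translation-model (TM) logits with log-probabilities from a
    pre-trained, frozen language model (LM) over the same vocabulary."""
    with torch.no_grad():                      # the LM stays fixed
        lm_logprobs = F.log_softmax(lm_logits, dim=-1)
    return F.log_softmax(tm_logits + lm_logprobs, dim=-1)

# Toy usage: batch of 2 positions over a 10-word vocabulary.
tm_logits = torch.randn(2, 10, requires_grad=True)   # TM, trained from scratch
lm_logits = torch.randn(2, 10)                        # from the fixed LM
loss = F.nll_loss(fused_logprobs(tm_logits, lm_logits), torch.tensor([3, 7]))
loss.backward()                                       # gradients flow only into the TM
```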

October 31, 2018

Phrase-Based & Neural Unsupervised Machine Translation

Empirical Methods in Natural Language Processing (EMNLP)

Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of bitexts, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model.

By: Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, Marc'Aurelio Ranzato

October 31, 2018

Auto-Encoding Dictionary Definitions into Consistent Word Embeddings

Empirical Methods in Natural Language Processing (EMNLP)

Monolingual dictionaries are widespread and semantically rich resources. This paper presents a simple model that learns to compute word embeddings by processing dictionary definitions and trying to reconstruct them.

By: Tom Bosc, Pascal Vincent
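
A rough PyTorch sketch of the general idea above (encode a definition into a single vector that serves as the embedding of the defined word, then reconstruct the definition from that vector); the paper's exact architecture and its consistency objective between word and definition embeddings are not reproduced here:

```python
import torch
import torch.nn as nn

class DefinitionAutoEncoder(nn.Module):
    """Sketch: the final encoder state of a definition becomes the word's
    embedding; a decoder tries to reconstruct the definition from it."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, definition_ids):
        _, (h, _) = self.encoder(self.embed(definition_ids))
        word_vec = h[-1]                            # (batch, dim): the word embedding
        # Condition the decoder on the word embedding (teacher forcing).
        dec_in = self.embed(definition_ids) + word_vec.unsqueeze(1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), word_vec          # reconstruction logits, embedding

model = DefinitionAutoEncoder(vocab_size=1000)
definition = torch.randint(0, 1000, (4, 9))         # 4 definitions, 9 tokens each
logits, word_vecs = model(definition)
print(word_vecs.shape)                              # torch.Size([4, 128])
```

Training would minimize the cross-entropy of the reconstruction against the definition tokens.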

October 31, 2018

Neural Compositional Denotational Semantics for Question Answering

Conference on Empirical Methods in Natural Language Processing (EMNLP)

Answering compositional questions requiring multi-step reasoning is challenging. We introduce an end-to-end differentiable model for interpreting questions about a knowledge graph (KG), which is inspired by formal approaches to semantics.

By: Nitish Gupta, Mike Lewis

October 31, 2018

Reference-less Quality Estimation of Text Simplification Systems

International Conference on Natural Language Generation (INLG)

In this paper, we compare multiple approaches to reference-less quality estimation of sentence-level text simplification systems, based on the dataset used for the QATS 2016 shared task. We distinguish three different dimensions: grammaticality, meaning preservation and simplicity. We show that n-gram-based MT metrics such as BLEU and METEOR correlate the most with human judgment of grammaticality and meaning preservation, whereas simplicity is best evaluated by basic length-based metrics.

By: Louis Martin, Samuel Humeau, Pierre-Emmanuel Mazaré, Antoine Bordes, Éric de La Clergerie, Benoit Steiner
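
As an illustration of the reference-less setting (a sketch under assumptions, not the paper's actual feature set): n-gram metrics such as BLEU can be computed between the system output and the original source sentence, while simplicity can be approximated with basic length ratios:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def referenceless_scores(source, simplified):
    """Toy reference-less features for a simplification output:
    BLEU of the output against the source (proxy for meaning preservation)
    and a compression ratio (length-based proxy for simplicity)."""
    src_tok, out_tok = source.split(), simplified.split()
    bleu = sentence_bleu([src_tok], out_tok,
                         smoothing_function=SmoothingFunction().method1)
    compression = len(out_tok) / max(len(src_tok), 1)
    return {"bleu_vs_source": bleu, "compression_ratio": compression}

print(referenceless_scores(
    "the cat , which was very old , sat on the mat .",
    "the old cat sat on the mat ."))
```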

October 31, 2018

A Dataset for Telling the Stories of Social Media Videos

Empirical Methods in Natural Language Processing (EMNLP)

Video content on social media platforms constitutes a major part of the communication between people, as it allows everyone to share their stories. However, if someone is unable to consume video, whether because of a disability or limited network bandwidth, this severely limits their participation and communication.

By: Spandana Gella, Mike Lewis, Marcus Rohrbach

October 31, 2018

Semantic Parsing for Task Oriented Dialog using Hierarchical Representations

Conference on Empirical Methods in Natural Language Processing (EMNLP)

Task-oriented dialog systems typically first parse user utterances into semantic frames composed of intents and slots. Previous work on…

By: Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, Mike Lewis
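
For illustration, a hierarchical semantic frame lets a slot value itself be another intent, which flat intent-slot frames cannot express; the sketch below uses made-up labels and an informal printout, not the paper's annotation scheme:

```python
# Flat parsing assigns one intent plus non-nested slots. A hierarchical frame
# allows a slot (here DESTINATION) to contain a nested intent, capturing
# compositional requests such as directions to an event.
frame = {
    "intent": "GET_DIRECTIONS",
    "slots": {
        "DESTINATION": {
            "intent": "GET_EVENT",
            "slots": {"NAME_EVENT": "my sister's wedding"},
        }
    },
}

def frame_to_string(node, indent=0):
    """Pretty-print a nested intent/slot frame."""
    pad = "  " * indent
    lines = [f"{pad}IN:{node['intent']}"]
    for slot, value in node.get("slots", {}).items():
        if isinstance(value, dict):
            lines.append(f"{pad}  SL:{slot} ->")
            lines.append(frame_to_string(value, indent + 2))
        else:
            lines.append(f"{pad}  SL:{slot} = {value!r}")
    return "\n".join(lines)

print(frame_to_string(frame))
```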