
October 31, 2018

Neural Compositional Denotational Semantics for Question Answering

Conference on Empirical Methods in Natural Language Processing (EMNLP)

Answering compositional questions requiring multi-step reasoning is challenging. We introduce an end-to-end differentiable model for interpreting questions about a knowledge graph (KG), which is inspired by formal approaches to semantics.

By: Nitish Gupta, Mike Lewis
October 31, 2018

Semantic Parsing for Task Oriented Dialog using Hierarchical Representations

Conference on Empirical Methods in Natural Language Processing (EMNLP)

Task oriented dialog systems typically first parse user utterances to semantic frames comprised of intents and slots. Previous work on…

By: Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, Mike Lewis
October 31, 2018

Extending Neural Generative Conversational Model using External Knowledge Sources

Conference on Empirical Methods in Natural Language Processing (EMNLP)

The use of connectionist approaches in conversational agents has been progressing rapidly due to the availability of large corpora. However, current generative dialogue models often lack coherence and are content poor. This work proposes an architecture that incorporates unstructured knowledge sources to enhance next-utterance prediction in chit-chat-style generative dialogue models.

By: Prasanna Parthasarathi, Joelle Pineau
October 31, 2018

Auto-Encoding Dictionary Definitions into Consistent Word Embeddings

Conference on Empirical Methods in Natural Language Processing (EMNLP)

Monolingual dictionaries are widespread and semantically rich resources. This paper presents a simple model that learns to compute word embeddings by processing dictionary definitions and trying to reconstruct them.

By: Tom Bosc, Pascal Vincent
October 31, 2018

A Dataset for Telling the Stories of Social Media Videos

Conference on Empirical Methods in Natural Language Processing (EMNLP)

Video content on social media platforms constitutes a major part of the communication between people, as it allows everyone to share their stories. However, if someone is unable to consume video, either due to a disability or network bandwidth, this severely limits their participation and communication.

By: Spandana Gella, Mike Lewis, Marcus Rohrbach
October 31, 2018

Retrieve and Refine: Improved Sequence Generation Models For Dialogue

Workshop on Search-Oriented Conversational AI (SCAI) at EMNLP

In this work, we develop a model that combines the two approaches to avoid both their deficiencies: first retrieve a response, then refine it, with the final sequence generator treating the retrieval as additional context.

By: Jason Weston, Emily Dinan, Alexander H. Miller
October 30, 2018

Scaling Neural Machine Translation

Conference on Machine Translation (WMT)

Sequence-to-sequence learning models still require several days to reach state-of-the-art performance on large benchmark datasets using a single machine. This paper shows that, with careful tuning and implementation, reduced precision and large-batch training can speed up training by nearly 5x on a single 8-GPU machine.

By: Myle Ott, Sergey Edunov, David Grangier, Michael Auli
October 29, 2018

XNLI: Evaluating Cross-lingual Sentence Representations

Conference on Empirical Methods in Natural Language Processing (EMNLP)

In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu.

By: Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, Ves Stoyanov
October 26, 2018

SING: Symbol-to-Instrument Neural Generator

Conference on Neural Information Processing Systems (NIPS)

In this work, we study the more computationally efficient alternative of generating the waveform frame-by-frame with large strides. We present SING, a lightweight neural audio synthesizer for the original task of generating musical notes given desired instrument, pitch and velocity.

By: Alexandre Défossez, Neil Zeghidour, Nicolas Usunier, Leon Bottou, Francis Bach
September 12, 2018

NAM: Non-Adversarial Unsupervised Domain Mapping

European Conference on Computer Vision (ECCV)

Several methods have recently been proposed for translating images between domains without prior knowledge in the form of correspondences. Existing methods apply adversarial learning to ensure that the distribution of the mapped source domain is indistinguishable from the target domain, an approach that suffers from known stability issues. In addition, most methods rely heavily on “cycle” relationships between the domains, which enforce a one-to-one mapping. In this work, we introduce an alternative method: Non-Adversarial Mapping (NAM), which separates the task of target-domain generative modeling from the cross-domain mapping task.

By: Yedid Hoshen, Lior Wolf