December 2, 2018

One-Shot Unsupervised Cross Domain Translation

Conference on Neural Information Processing Systems (NIPS)

Given a single image x from domain A and a set of images from domain B, our task is to generate an analog of x in B. We argue that this task could be a key AI capability that underlies the ability of cognitive agents to act in the world, and we present empirical evidence that existing unsupervised domain translation methods fail on this task.

By: Sagie Benaim, Lior Wolf
November 27, 2018

AnoGen: Deep Anomaly Generator

Outlier Detection De-constructed (ODD) Workshop

Motivated by the continued success of Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs) in producing realistic-looking data, we provide AnoGen, a platform for generating realistic time series with anomalies.

By: Nikolay Laptev
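
As an illustration of the general idea behind such a generator (not AnoGen's implementation, which is not shown here), the sketch below trains a small PyTorch VAE on windows of a "normal" time series and then decodes exaggerated latent samples to produce anomalous-looking windows. The window length, network sizes, and stand-in sine-wave data are illustrative assumptions.

import torch
import torch.nn as nn

WINDOW, LATENT = 32, 8

class WindowVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU())
        self.mu = nn.Linear(64, LATENT)
        self.logvar = nn.Linear(64, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, WINDOW))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = ((recon - x) ** 2).sum(dim=1).mean()                              # reconstruction error
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()  # KL to the prior
    return rec + kld

# Train on "normal" windows (noisy sine waves stand in for real telemetry).
t = torch.linspace(0, 6.28, WINDOW)
normal_windows = torch.sin(t) + 0.05 * torch.randn(512, WINDOW)
vae = WindowVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for _ in range(200):
    recon, mu, logvar = vae(normal_windows)
    loss = vae_loss(recon, normal_windows, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

# Decode latent codes sampled far from the prior mean to obtain anomalous-looking windows.
with torch.no_grad():
    anomalies = vae.dec(3.0 * torch.randn(16, LATENT))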
November 27, 2018

Deep Incremental Learning for Efficient High-Fidelity Face Tracking

ACM SIGGRAPH Asia 2018

In this paper, we present an incremental learning framework for efficient and accurate facial performance tracking. Our approach alternates between a modeling step, which takes tracked meshes and texture maps and trains our deep learning-based statistical model, and a tracking step, which takes the geometry and texture our model infers from measured images and optimizes the predicted geometry by minimizing image, geometry, and facial landmark errors.

By: Chenglei Wu, Takaaki Shiratori, Yaser Sheikh
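
The alternation described in the abstract above can be sketched as a simple two-phase loop. The toy tensors, the linear stand-in for the statistical model, and the loss weights below are illustrative assumptions, not the authors' implementation (the real system operates on meshes, texture maps, and image evidence).

import torch
import torch.nn as nn

frames = torch.randn(100, 16)      # stand-in per-frame image features
estimates = torch.randn(100, 8)    # current per-frame geometry estimates
landmarks = torch.randn(100, 8)    # stand-in landmark targets

model = nn.Linear(16, 8)           # toy statistical model: features -> geometry
model_opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for outer in range(5):
    # Modeling step: fit the statistical model to the current tracked estimates.
    for _ in range(100):
        loss = ((model(frames) - estimates) ** 2).mean()
        model_opt.zero_grad(); loss.backward(); model_opt.step()

    # Tracking step: refine each frame's estimate, starting from the model prediction,
    # by minimizing a weighted sum of error terms.
    preds = model(frames).detach()
    refined = preds.clone().requires_grad_(True)
    track_opt = torch.optim.Adam([refined], lr=1e-2)
    for _ in range(100):
        geometry_err = ((refined - preds) ** 2).mean()       # stay close to the model prediction
        landmark_err = ((refined - landmarks) ** 2).mean()   # match (stand-in) landmark targets
        loss = geometry_err + 0.5 * landmark_err             # an image error term would be added here
        track_opt.zero_grad(); loss.backward(); track_opt.step()
    estimates = refined.detach()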
November 2, 2018

Jump to better conclusions: SCAN both left and right

Empirical Methods in Natural Language Processing (EMNLP)

Lake and Baroni (2018) recently introduced the SCAN data set, which consists of simple commands paired with action sequences and is intended to test the strong generalization abilities of recurrent sequence-to-sequence models. Their initial experiments suggested that such models may fail because they lack the ability to extract systematic rules. Here, we take a closer look at SCAN and show that it does not always capture the kind of generalization that it was designed for.

By: Joost Bastings, Marco Baroni, Jason Weston, Kyunghyun Cho, Douwe Kiela
November 2, 2018

Do explanations make VQA models more predictable to a human?

Empirical Methods in Natural Language Processing (EMNLP)

A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable ‘explanations’ of their decision process, especially for interactive tasks like Visual Question Answering (VQA). In this work, we analyze whether existing explanations indeed make a VQA model – its responses as well as its failures – more predictable to a human.

By: Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh
November 1, 2018

Horizon: Facebook’s Open Source Applied Reinforcement Learning Platform

ArXiv

In this paper we present Horizon, Facebook’s open source applied reinforcement learning (RL) platform. Horizon is an end-to-end platform designed to solve industry applied RL problems where datasets are large (millions to billions of observations), the feedback loop is slow (vs. a simulator), and experiments must be done with care because they don’t run in a simulator.

By: Jason Gauci, Edoardo Conti, Yitao Liang, Kittipat Virochsiri, Yuchen He, Zachary Kaden, Vivek Narayanan, Xiaohui Ye
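
Because training happens from logged data rather than a simulator, the core workflow resembles batch (offline) value-based reinforcement learning. The sketch below shows a generic fitted-Q-style loop over a table of logged transitions; it is only an assumption-laden illustration of that setting, not Horizon's actual API or pipeline, and all shapes, data, and hyperparameters are placeholders.

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99

# A logged dataset of (state, action, reward, next_state, done) tuples (random stand-ins here).
N = 10_000
states = torch.randn(N, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (N,))
rewards = torch.randn(N)
next_states = torch.randn(N, STATE_DIM)
dones = torch.randint(0, 2, (N,)).float()

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

for step in range(2000):
    idx = torch.randint(0, N, (256,))   # sample a mini-batch of logged transitions
    q = q_net(states[idx]).gather(1, actions[idx].unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards[idx] + GAMMA * (1 - dones[idx]) * target_net(next_states[idx]).max(1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 200 == 0:                 # periodically refresh the target network
        target_net.load_state_dict(q_net.state_dict())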
November 1, 2018

How agents see things: On visual representations in an emergent language game

Empirical Methods in Natural Language Processing (EMNLP)

There is growing interest in the language developed by agents interacting in emergent-communication settings. Earlier studies have focused on the agents’ symbol usage, rather than on their representation of visual input. In this paper, we consider the referential games of Lazaridou et al. (2017), and investigate the representations the agents develop during their evolving interaction.

By: Diane Bouchacourt, Marco Baroni
November 1, 2018

Non-Adversarial Unsupervised Word Translation

Empirical Methods in Natural Language Processing (EMNLP)

In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods.

By: Yedid Hoshen, Lior Wolf
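
A generic version of such iterative matching can be illustrated in a few lines: alternate a nearest-neighbour assignment with an orthogonal Procrustes fit until the two point clouds agree. The toy 2-D data, rotation, and iteration count below are assumptions for illustration, not the authors' exact procedure (which operates on word embedding distributions).

import numpy as np

rng = np.random.default_rng(0)
theta = np.deg2rad(20)
true_R = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])

X = rng.normal(size=(500, 2)) * np.array([3.0, 1.0])  # "source" points (anisotropic cloud)
Y = X @ true_R.T + 0.01 * rng.normal(size=X.shape)    # "target" points: rotated copy plus noise

W = np.eye(2)                                         # current estimate of the mapping
for _ in range(20):
    # Matching step: pair every mapped source point with its nearest target point.
    mapped = X @ W.T
    nn_idx = ((mapped[:, None, :] - Y[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    # Fitting step: solve orthogonal Procrustes for the current pairing.
    U, _, Vt = np.linalg.svd(Y[nn_idx].T @ X)
    W = U @ Vt

print(np.abs(X @ W.T - Y).mean())   # a small residual means the two clouds are aligned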
October 31, 2018

Training Millions of Personalized Dialogue Agents

Empirical Methods in Natural Language Processing (EMNLP)

In this paper we introduce a new dataset providing 5 million personas and 700 million persona-based dialogues.

By: Pierre-Emmanuel Mazaré, Samuel Humeau, Martin Raison, Antoine Bordes
October 31, 2018

Neural Compositional Denotational Semantics for Question Answering

Conference on Empirical Methods in Natural Language Processing (EMNLP)

Answering compositional questions requiring multi-step reasoning is challenging. We introduce an end-to-end differentiable model for interpreting questions about a knowledge graph (KG), which is inspired by formal approaches to semantics.

By: Nitish Gupta, Mike Lewis