
June 1, 2018

The Immersive VR Self: Performance, Embodiment and Presence in Immersive Virtual Reality Environments

Book chapter from A Networked Self and Human Augmentics, AI, Sentience

Virtual avatars are a common way to present oneself in online social interactions. From cartoonish emoticons to hyper-realistic humanoids, these online representations help us portray a certain image to our respective audiences.

By: Raz Schwartz, William Steptoe
June 1, 2018

QuickEdit: Editing Text & Translations by Crossing Words Out

Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)

We propose a framework for computer-assisted text editing. It applies to translation post-editing and to paraphrasing. Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change.

By: David Grangier, Michael Auli
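
The marking interaction can be made concrete with a toy sketch (the function name and the KEEP/CHANGE flags are illustrative, not the paper's actual data format; the real system feeds such annotations to a seq2seq model that rewrites the marked spans):

```python
def mark_tokens(tokens, change_indices):
    """Pair each token with a keep/change flag, mimicking an editor
    crossing out the words they want the system to rewrite."""
    return [(tok, "CHANGE" if i in change_indices else "KEEP")
            for i, tok in enumerate(tokens)]

# The editor crosses out "cat" and "mat" in a draft sentence:
draft = "the cat sat on the mat".split()
annotated = mark_tokens(draft, {1, 5})
```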
May 24, 2018

A Universal Music Translation Network

ArXiv

We present a method for translating music across musical instruments, genres, and styles. This method is based on a multi-domain wavenet autoencoder, with a shared encoder and a disentangled latent space that is trained end-to-end on waveforms.

By: Noam Mor, Lior Wolf, Adam Polyak, Yaniv Taigman
May 7, 2018

Advances in Pre-Training Distributed Word Representations

Language Resources and Evaluation Conference (LREC)

In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are, however, rarely used together.

By: Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, Armand Joulin
May 2, 2018

Exploring the Limits of Weakly Supervised Pretraining

ArXiv

In this paper, we present a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images.

By: Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, Laurens van der Maaten
April 30, 2018

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks

International Conference on Learning Representations (ICLR)

We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does…

By: Shiyu Liang, Yixuan Li, R. Srikant
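
One ingredient of ODIN is temperature-scaled max-softmax scoring (the full method also perturbs the input using the loss gradient, which is omitted in this sketch; the value of T below is a tuned hyperparameter, not a recommendation):

```python
import numpy as np

def odin_score(logits, T=1000.0):
    """Temperature-scaled max-softmax score: higher values suggest
    an in-distribution input, lower values an out-of-distribution one."""
    z = logits / T
    z = z - z.max()                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

# A peaked logit vector scores higher than a nearly flat one:
peaked = np.array([10.0, 0.0, 0.0])
flat = np.array([1.0, 0.9, 0.8])
```

Thresholding this score then separates in-distribution from out-of-distribution images.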
April 30, 2018

VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop

International Conference on Learning Representations (ICLR)

We present a new neural text to speech (TTS) method that is able to transform text to speech in voices that are sampled in the wild. Unlike other systems, our solution is able to deal with unconstrained voice samples and without requiring aligned phonemes or linguistic features. The network architecture is simpler than those in the existing literature and is based on a novel shifting buffer working memory. The same buffer is used for estimating the attention, computing the output audio, and for updating the buffer itself.

By: Yaniv Taigman, Lior Wolf, Adam Polyak, Eliya Nachmani
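
The shifting-buffer idea can be sketched minimally as follows (this simplification shows only the shift itself; in the paper the new vector u is computed from the attention context, the speaker embedding, and the buffer's current contents):

```python
import numpy as np

def shift_buffer(buf, u):
    """Shift a working-memory buffer of k column vectors: discard
    the oldest column and append the new vector u."""
    return np.concatenate([buf[:, 1:], u[:, None]], axis=1)

d, k = 4, 3                     # feature dimension, buffer length
buf = np.zeros((d, k))          # empty working memory
u = np.ones(d)                  # newly computed representation
buf = shift_buffer(buf, u)      # u now occupies the last slot
```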
April 30, 2018

Graph Attention Networks

International Conference on Learning Representations (ICLR)

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.

By: Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio
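
A single-head attention layer of this kind can be sketched in a few lines of NumPy (a simplified reading of the paper's mechanism: scores LeakyReLU(aᵀ[Wh_i ‖ Wh_j]) are masked by the adjacency matrix before the softmax; the full model adds multiple heads, a nonlinearity, and learned W and a):

```python
import numpy as np

def gat_layer(H, A, W, a, alpha=0.2):
    """Single-head graph attention layer: attend over each node's
    neighbors (per adjacency A) and aggregate transformed features."""
    Z = H @ W                                    # (N, F') transformed features
    Fp = Z.shape[1]
    # pairwise scores a^T [z_i || z_j], split into the two halves of a
    e = (Z @ a[:Fp])[:, None] + (Z @ a[Fp:])[None, :]
    e = np.where(e > 0, e, alpha * e)            # LeakyReLU
    e = np.where(A > 0, e, -1e9)                 # mask non-neighbors
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)   # row-wise softmax
    return att @ Z                               # aggregate neighbor features

# Identity adjacency (self-loops only): each node attends to itself,
# so the layer reduces to the linear map H @ W.
H = np.arange(6.0).reshape(3, 2)
W = np.eye(2)
a = np.ones(4)
out = gat_layer(H, np.eye(3), W, a)
```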
April 30, 2018

Empirical Analysis of the Hessian of Over-Parametrized Neural Networks

International Conference on Learning Representations (ICLR)

We study the properties of common loss surfaces through their Hessian matrix. In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk, centered near zero, and (2) outliers away from the bulk. We present numerical evidence and mathematical justifications for the following conjectures laid out by Sagun et al. (2016): fixing the data and increasing the number of parameters merely scales the bulk of the spectrum, while fixing the dimension and changing the data (for instance, adding more clusters or making the data less separable) only affects the outliers.

By: Levent Sagun, Utku Evci, V. Ugur Güney, Yann Dauphin, Leon Bottou
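
The bulk-at-zero phenomenon already appears in the simplest over-parametrized model: least-squares regression with more parameters than samples, whose Hessian XᵀX / n has rank at most n. (This toy calculation only illustrates the spectral picture; it is not the paper's deep-network experiments.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 20                        # fewer samples than parameters
X = rng.standard_normal((n, p))

# For L(w) = ||Xw - y||^2 / (2n), the Hessian is X^T X / n for any w.
H = X.T @ X / n
eig = np.linalg.eigvalsh(H)

# Rank <= n forces at least p - n eigenvalues to sit exactly at zero:
# a "bulk" at zero plus a handful of outliers.
bulk = int(np.sum(np.abs(eig) < 1e-6))
```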
April 30, 2018

Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play

International Conference on Learning Representations (ICLR)

We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner. Our scheme pits two versions of the same agent, Alice and Bob, against one another.

By: Sainbayar Sukhbaatar, Zeming Lin, Ilya Kostrikov, Gabriel Synnaeve, Arthur Szlam, Rob Fergus