236 Results

June 18, 2018

Low-shot learning with large-scale diffusion

Computer Vision and Pattern Recognition (CVPR)

This paper considers the problem of inferring image labels from images when only a few annotated examples are available at training time.

By: Matthijs Douze, Arthur Szlam, Bharath Hariharan, Hervé Jégou
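
The "diffusion" in the title refers to graph-based label propagation: a handful of labeled images spread their labels to many unlabeled neighbors over a similarity graph. A minimal numpy sketch of that general idea, not the paper's exact pipeline (the affinity matrix `W` and the hyperparameters are assumptions):

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.8, iters=20):
    """Diffuse few-shot labels over a similarity graph.

    W: (n, n) row-normalized affinity matrix over all images,
       e.g. built from k-nearest neighbors in a CNN feature space.
    Y: (n, c) one-hot labels, with all-zero rows for unlabeled images.
    """
    F = Y.astype(float).copy()
    for _ in range(iters):
        # Each image takes a weighted average of its neighbors' scores,
        # while the labeled seeds keep re-injecting their ground truth.
        F = alpha * (W @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)  # predicted class per image
```
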
June 17, 2018

Neural Baby Talk

Computer Vision and Pattern Recognition (CVPR)

We introduce a novel framework for image captioning that can produce natural language explicitly grounded in entities that object detectors find in the image.

By: Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh
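
The grounding idea can be pictured as a caption template with slots that detector outputs fill in; the paper's actual model points into image regions while generating, but the toy sketch below (with a hypothetical template format) illustrates the slot-filling step:

```python
import re

def fill_caption_template(template, detections):
    """Fill <slot> placeholders in a caption template with object
    labels from a detector, in order. A toy stand-in for the paper's
    pointer mechanism over image regions."""
    for label in detections:
        template = re.sub(r"<slot>", label, template, count=1)
    return template

# Hypothetical detector output and template:
print(fill_caption_template("a <slot> sitting on a <slot>",
                            ["cat", "chair"]))  # a cat sitting on a chair
```
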
June 1, 2018

The Immersive VR Self: Performance, Embodiment and Presence in Immersive Virtual Reality Environments

Book chapter in A Networked Self and Human Augmentics, Artificial Intelligence, Sentience

Virtual avatars are a common way to present oneself in online social interactions. From cartoonish emoticons to hyper-realistic humanoids, these online representations help us portray a certain image to our respective audiences.

By: Raz Schwartz, William Steptoe
June 1, 2018

Colorless Green Recurrent Networks Dream Hierarchically

Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)

Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure. We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions.

By: Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni
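
The standard form of this test asks whether the model assigns higher probability to the verb form that agrees with the head noun than to the one agreeing with a closer distractor noun. A small sketch, assuming a hypothetical sentence-scoring function `log_prob`:

```python
def prefers_grammatical(log_prob, prefix, good, bad):
    """Number-agreement test item: does the language model score the
    agreeing verb form above the non-agreeing one?

    Example of long-distance agreement across a distractor:
      prefix = "the keys to the cabinet"
      good   = "are"   # agrees with the head noun "keys"
      bad    = "is"    # agrees with the closer noun "cabinet"
    """
    return log_prob(f"{prefix} {good}") > log_prob(f"{prefix} {bad}")
```

Accuracy over a set of such items measures how well the model tracks hierarchical structure rather than surface proximity.
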
June 1, 2018

QuickEdit: Editing Text & Translations by Crossing Words Out

Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)

We propose a framework for computer-assisted text editing. It applies to translation post-editing and to paraphrasing. Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change.

By: David Grangier, Michael Auli
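
One natural encoding of such a request is the draft sentence paired with a binary mark per token, which a sequence-to-sequence model can then condition on. The sketch below is an illustrative assumption about the input format, not the paper's exact one:

```python
def encode_edit_request(tokens, crossed_out):
    """Pair each token of the draft with a flag: 1 means the editor
    crossed it out and wants the system to change it, 0 means keep."""
    return [(t, 1 if t in crossed_out else 0) for t in tokens]

print(encode_edit_request("the cat sat on the mat".split(), {"mat"}))
# [('the', 0), ('cat', 0), ('sat', 0), ('on', 0), ('the', 0), ('mat', 1)]
```
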
May 24, 2018

A Universal Music Translation Network

arXiv

We present a method for translating music across musical instruments, genres, and styles. This method is based on a multi-domain WaveNet autoencoder, with a shared encoder and a disentangled latent space that is trained end-to-end on waveforms.

By: Noam Mor, Lior Wolf, Adam Polyak, Yaniv Taigman
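
The shared-encoder, per-domain-decoder layout can be sketched in a few lines of PyTorch. This toy version uses plain 1-D convolutions in place of WaveNet blocks and omits the extra machinery the paper uses to disentangle the latent space:

```python
import torch
import torch.nn as nn

class MultiDomainAutoencoder(nn.Module):
    """One shared encoder, one decoder per musical domain."""
    def __init__(self, n_domains, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(              # shared across domains
            nn.Conv1d(1, channels, 9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, channels, 9, padding=4),
        )
        self.decoders = nn.ModuleList([            # one per domain
            nn.Sequential(
                nn.Conv1d(channels, channels, 9, padding=4), nn.ReLU(),
                nn.Conv1d(channels, 1, 9, padding=4),
            )
            for _ in range(n_domains)
        ])

    def forward(self, wave, domain):
        z = self.encoder(wave)           # domain-agnostic latent code
        return self.decoders[domain](z)  # render it in the target domain

# "Translation": encode audio from one domain, decode with another
# domain's decoder.
model = MultiDomainAutoencoder(n_domains=3)
piano = torch.randn(1, 1, 16000)         # one second of fake audio
violin_version = model(piano, domain=1)
```
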
May 16, 2018

Deep Learning Coordinated Beamforming for Highly-Mobile Millimeter Wave Systems

arXiv

Supporting high mobility in millimeter wave (mmWave) systems enables a wide range of important applications such as vehicular communications and wireless virtual/augmented reality. Realizing this in practice, though, requires overcoming several challenges.

By: Ahmed Alkhateeb, Sam Alex, Paul Varkey, Ying Li, Qi Qu, Djordje Tujkovic
May 15, 2018

Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization

Proceedings of the IEEE

Motivated by applications such as decentralized estimation in sensor networks, fitting models to massive data sets, and decentralized control of multi-robot systems, researchers have made significant advances toward robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area.

By: Angelia Nedić, Alex Olshevsky, Mike Rabbat
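
A canonical algorithm in this literature is decentralized gradient descent: each node averages its iterate with its neighbors' (according to a mixing matrix that encodes the network topology) and then takes a step along its local gradient. A minimal sketch on a three-node path graph:

```python
import numpy as np

def decentralized_gd(grads, W, x0, step=0.1, iters=200):
    """grads: per-node gradient functions grad_i(x).
    W: (n, n) doubly stochastic mixing matrix; W[i, j] > 0 only if
       nodes i and j are linked, so sparser topologies mean cheaper
       communication but slower consensus.
    x0: (n, d) initial iterate at each node."""
    X = x0.copy()
    for _ in range(iters):
        G = np.stack([g(x) for g, x in zip(grads, X)])
        X = W @ X - step * G   # gossip average, then local step
    return X.mean(axis=0)

# Three nodes, each holding one quadratic 0.5 * (x - t_i)^2;
# the network-wide minimizer is the mean of the targets.
targets = [0.0, 1.0, 5.0]
grads = [lambda x, t=t: x - t for t in targets]
W = np.array([[2/3, 1/3, 0.0],      # Metropolis weights for the
              [1/3, 1/3, 1/3],      # path graph 1 - 2 - 3
              [0.0, 1/3, 2/3]])
print(decentralized_gd(grads, W, np.zeros((3, 1))))  # ~ [2.]
```
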
May 8, 2018

Optimization Methods for Large-Scale Machine Learning

SIAM Review

This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications.

By: Léon Bottou, Frank E. Curtis, Jorge Nocedal
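
At the center of such reviews is plain stochastic gradient descent, which replaces the full gradient with the gradient at a single randomly drawn example. A minimal sketch on a toy least-squares problem:

```python
import numpy as np

def sgd(grad_sample, x0, data, lr=0.1, epochs=100, seed=0):
    """Stochastic gradient descent with per-epoch reshuffling."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            x -= lr * grad_sample(x, data[i])   # one-sample estimate
    return x

# Least squares: the gradient of 0.5 * (a @ x - b)^2 is a * (a @ x - b).
data = [(np.array([1.0, 2.0]), 3.0), (np.array([2.0, 1.0]), 3.0)]
grad = lambda x, ab: ab[0] * (ab[0] @ x - ab[1])
print(sgd(grad, np.zeros(2), data))  # ~ [1., 1.]
```
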
May 7, 2018

Advances in Pre-Training Distributed Word Representations

Language Resources and Evaluation Conference (LREC)

In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are, however, rarely used together.

By: Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, Armand Joulin
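
One such trick, going back to the original word2vec work, is merging frequent collocations (e.g. "new york") into single tokens before training. A sketch of that preprocessing step; the thresholds here are illustrative:

```python
from collections import Counter

def find_phrases(corpus, delta=5, threshold=1e-4):
    """Score bigrams with the word2vec phrase criterion
    (count(ab) - delta) / (count(a) * count(b)) and keep those above
    a threshold, to be merged into single tokens before training."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))
    return {pair for pair, n in bigrams.items()
            if (n - delta) / (unigrams[pair[0]] * unigrams[pair[1]]) > threshold}
```
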