
April 30, 2018

Unsupervised Machine Translation Using Monolingual Corpora Only

International Conference on Learning Representations (ICLR)

Machine translation has recently achieved impressive performance thanks to advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet these still require tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data.

By: Guillaume Lample, Alexis Conneau, Ludovic Denoyer, Marc'Aurelio Ranzato
April 30, 2018

When is a Convolutional Filter Easy to Learn?

International Conference on Learning Representations (ICLR)

We analyze the convergence of (stochastic) gradient descent algorithm for learning a convolutional filter with Rectified Linear Unit (ReLU) activation function. Our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input.

By: Simon S. Du, Jason D. Lee, Yuandong Tian
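
As a rough illustration of the setting the abstract describes, the sketch below recovers a single "teacher" convolutional filter with a ReLU activation by running gradient descent on the squared loss. The data distribution, filter size, step size, and iteration count are arbitrary choices for the example, not the assumptions analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_patches, steps, lr = 5, 2048, 500, 0.1

w_true = rng.normal(size=k)                 # "teacher" filter to be recovered
w = rng.normal(size=k)                      # learner's filter, random initialization
X = rng.normal(size=(n_patches, k))         # input patches (illustrative Gaussian data)

relu = lambda z: np.maximum(z, 0.0)
y = relu(X @ w_true)                        # labels produced by the teacher filter

for _ in range(steps):
    pre = X @ w
    resid = relu(pre) - y
    # gradient of 0.5 * mean((relu(Xw) - y)^2); the ReLU derivative is the 0/1 indicator
    grad = (X * ((pre > 0) * resid)[:, None]).mean(axis=0)
    w -= lr * grad

print("distance to the teacher filter:", np.linalg.norm(w - w_true))
```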
April 30, 2018

Residual Connections Encourage Iterative Inference

International Conference on Learning Representations (ICLR)

Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect.

By: Stanislaw Jastrzebski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, Yoshua Bengio
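
The "iterative refinement" reading can be made concrete with a small sketch: each residual block computes h + f(h), so a stack of blocks looks like repeatedly applying corrections to the features. The block structure, dimensions, and depth below are illustrative choices, not the architectures studied in the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h):
        return h + self.f(h)                # identity path plus a learned correction

dim, depth = 16, 8
blocks = nn.ModuleList([ResidualBlock(dim) for _ in range(depth)])

h = torch.randn(4, dim)
for i, block in enumerate(blocks):
    h_next = block(h)
    # each block changes the representation by an increment; under the iterative
    # refinement view, trained ResNets tend to make these increments small
    ratio = (h_next - h).norm() / h.norm()
    print(f"block {i}: ||h_next - h|| / ||h|| = {ratio.item():.3f}")
    h = h_next
```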
April 30, 2018

Building Generalizable Agents with a Realistic and Rich 3D Environment

International Conference on Learning Representations (ICLR)

Teaching an agent to navigate in an unseen 3D environment is a challenging task, even in simulated environments. To generalize to unseen environments, an agent needs to be robust to low-level variations (e.g., color, texture, and object changes) as well as high-level variations (e.g., layout changes of the environment). To improve overall generalization, all types of variation in the environment have to be taken into consideration via different levels of data augmentation.

By: Yi Wu, Yuxin Wu, Georgia Gkioxari, Yuandong Tian
April 30, 2018

Word Translation Without Parallel Data

International Conference on Learning Representations (ICLR)

In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way.

By: Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou
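
The core operation, mapping one word-embedding space onto another with a linear transformation, can be sketched as below. The paper obtains this alignment without any bilingual supervision; the toy example instead builds a synthetic "rotated copy" of the source space and recovers the map with the closed-form orthogonal Procrustes solution, purely to illustrate the alignment step and nearest-neighbour word retrieval.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 1000

X = rng.normal(size=(n, d))                 # "source-language" embeddings
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ Q_true                              # "target-language" embeddings: a rotated copy

# orthogonal Procrustes: the best orthogonal W mapping X onto Y in the least-squares sense
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# "translate" the first source word: map it and look up its nearest target neighbour
query = X[0] @ W
sims = Y @ query / (np.linalg.norm(Y, axis=1) * np.linalg.norm(query))
print("nearest target index (expected 0):", int(np.argmax(sims)))
```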
April 30, 2018

VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop

International Conference on Learning Representations (ICLR)

We present a new neural text-to-speech (TTS) method that is able to transform text to speech in voices that are sampled in the wild. Unlike other systems, our solution is able to deal with unconstrained voice samples, without requiring aligned phonemes or linguistic features. The network architecture is simpler than those in the existing literature and is based on a novel shifting buffer working memory. The same buffer is used for estimating the attention, computing the output audio, and for updating the buffer itself.

By: Yaniv Taigman, Lior Wolf, Adam Polyak, Eliya Nachmani
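
A shifting-buffer working memory of the kind mentioned above can be sketched in a few lines: at each step a new vector is pushed into the front of a fixed-size buffer, the oldest entry is dropped, and the whole buffer is read to produce an output. The shapes, the random "new entry", and the linear readout are placeholders for illustration only, not the VoiceLoop architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
buffer_size, dim, steps = 10, 8, 5

buf = np.zeros((buffer_size, dim))          # the shifting buffer (working memory)
readout = 0.1 * rng.normal(size=buffer_size * dim)   # placeholder output projection

for t in range(steps):
    new_entry = rng.normal(size=dim)        # placeholder; in VoiceLoop this vector is
                                            # computed from the text context, the speaker,
                                            # and the current buffer contents
    buf = np.vstack([new_entry, buf[:-1]])  # shift: newest entry in, oldest entry out
    output = readout @ buf.ravel()          # the same buffer is read to produce the output
    print(f"step {t}: output = {output:.3f}")
```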
April 21, 2018

Speech Communication through the Skin: Design of Learning Protocols and Initial Findings

Computer Human Interaction (CHI)

This study reports the design and testing of learning protocols with a system that translates English phonemes to haptic stimulation patterns (haptic symbols).

By: Jaehong Jung, Yang Jiao, Frederico M. Severgnini, Hong Z. Tan, Charlotte M. Reed, Ali Israr, Frances Lau, Freddy Abnousi
April 15, 2018

Learning Filterbanks from Raw Speech for Phone Recognition

International Conference on Acoustics, Speech and Signal Processing (ICASSP)

We train a bank of complex filters that operates on the raw waveform and is fed into a convolutional neural network for end-to-end phone recognition.

By: Neil Zeghidour, Nicolas Usunier, Iasonas Kokkinos, Thomas Schatz, Gabriel Synnaeve, Emmanuel Dupoux
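
A learnable filterbank operating directly on the raw waveform can be sketched as a 1-D convolution whose output channels come in real/imaginary pairs, followed by a squared modulus and a log compression. The filter count, kernel width, and stride below are illustrative guesses rather than the paper's configuration, and the phone-recognition network that consumes the features is omitted.

```python
import torch
import torch.nn as nn

n_filters, kernel, stride = 40, 400, 160    # roughly 25 ms windows, 10 ms hops at 16 kHz

class LearnableFilterbank(nn.Module):
    def __init__(self):
        super().__init__()
        # 2 * n_filters output channels: interleaved real and imaginary parts
        self.conv = nn.Conv1d(1, 2 * n_filters, kernel_size=kernel,
                              stride=stride, bias=False)

    def forward(self, wav):                 # wav: (batch, 1, samples)
        z = self.conv(wav)                  # (batch, 2 * n_filters, frames)
        real, imag = z[:, 0::2], z[:, 1::2]
        power = real ** 2 + imag ** 2       # squared modulus per filter
        return torch.log1p(power)           # simple compression of the energies

frontend = LearnableFilterbank()
features = frontend(torch.randn(2, 1, 16000))   # one second of fake audio
print(features.shape)                            # (2, n_filters, frames)
```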
April 3, 2018

DesIGN: Design Inspiration from Generative Networks

arXiv

Can an algorithm create original and compelling fashion designs to serve as an inspirational assistant? To help answer this question, we design and investigate different image generation models associated with different loss functions to boost creativity in fashion generation.

By: Othman Sbai, Mohamed Elhoseiny, Antoine Bordes, Yann LeCun, Camille Couprie
March 19, 2018

Randomized algorithms for distributed computation of principal component analysis and singular value decomposition

Advances in Computational Mathematics

Randomized algorithms provide solutions to two ubiquitous problems: (1) the distributed calculation of a principal component analysis or singular value decomposition of a highly rectangular matrix, and (2) the distributed calculation of a low-rank approximation (in the form of a singular value decomposition) to an arbitrary matrix.

By: Huamin Li, Yuval Kluger, Mark Tygert
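
The kind of randomized low-rank factorization these algorithms build on can be sketched as follows: compress the matrix with a random test matrix, orthonormalize the result, and compute an exact SVD of the much smaller projected problem. The sizes, target rank, and oversampling below are arbitrary; the paper's contribution, the distributed formulation and its analysis, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, rank, oversample = 10000, 50, 10, 5    # a highly rectangular input matrix

A = rng.normal(size=(m, rank)) @ rng.normal(size=(rank, n))   # low-rank test matrix

# 1) sketch: compress the columns of A with a random Gaussian test matrix
Omega = rng.normal(size=(n, rank + oversample))
Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis for (approximately) the range of A

# 2) project onto that basis and solve the small problem exactly
B = Q.T @ A                                  # (rank + oversample) x n
U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ U_small                              # lift the left singular vectors back up

approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
print("relative error:", np.linalg.norm(A - approx) / np.linalg.norm(A))
```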