
April 30, 2018

Identifying Analogies Across Domains

International Conference on Learning Representations (ICLR)

In this paper, we tackle the task of finding exact analogies between datasets, i.e., for every image in domain A, finding an analogous image in domain B. We present a matching-by-synthesis approach, AN-GAN, and show that it outperforms current techniques.

By: Yedid Hoshen, Lior Wolf
April 30, 2018

Emergent Communication in a Multi-Modal, Multi-Step Referential Game

International Conference on Learning Representations (ICLR)

Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration.

By: Katrina Evtimova, Andrew Drozdov, Douwe Kiela, Kyunghyun Cho
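The game protocol described above can be sketched with rule-based stand-ins for the learned agents. The attribute dictionaries, the query loop, and the stopping rule below are illustrative assumptions, not the paper's trained sender and receiver:

```python
def referential_game(sender_view, receiver_candidates, max_steps=10):
    """Toy skeleton of a multi-step, bidirectional referential game.

    sender_view: the sender's private view of the target object.
    receiver_candidates: the objects the receiver must choose among.
    The exchange is bidirectional (receiver queries, sender replies) and of
    adaptive duration: the receiver halts once it can identify the target.
    """
    remaining = list(receiver_candidates)
    attrs = sorted(sender_view)                  # shared attribute vocabulary
    steps = 0
    for attr in attrs[:max_steps]:
        steps += 1
        answer = sender_view[attr]               # sender -> receiver: reply about its view
        remaining = [c for c in remaining if c.get(attr) == answer]
        if len(remaining) <= 1:                  # receiver decides to stop the exchange
            break
    return (remaining[0] if remaining else None), steps

# toy usage: two candidates that differ only in color
target = {"color": "red", "shape": "square"}
distractor = {"color": "blue", "shape": "square"}
guess, steps = referential_game(target, [target, distractor])
```

In the paper both agents are learned and the messages are emergent symbols; here the "messages" are hand-coded attribute queries, which only serves to make the turn structure and adaptive stopping concrete.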
April 30, 2018

Graph Attention Networks

International Conference on Learning Representations (ICLR)

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.

By: Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio
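A minimal NumPy sketch of one masked self-attention layer of the kind described above, assuming single-head attention and a LeakyReLU scoring function; the shapes and parameter names are illustrative:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_layer(H, A, W, a):
    """One masked self-attention layer in the spirit of GAT.

    H: (N, F) node features; A: (N, N) adjacency with self-loops (1 = edge);
    W: (F, F2) shared linear map; a: (2 * F2,) attention parameters.
    The attention logits are masked so each node attends only to its neighbors.
    """
    Z = H @ W                                    # (N, F2) transformed features
    F2 = Z.shape[1]
    src = Z @ a[:F2]                             # score contribution of attending node
    dst = Z @ a[F2:]                             # score contribution of attended node
    e = leaky_relu(src[:, None] + dst[None, :])  # (N, N) raw attention logits
    e = np.where(A > 0, e, -1e9)                 # mask out non-neighbors
    e = e - e.max(axis=1, keepdims=True)         # numerical stability
    alpha_w = np.exp(e)
    alpha_w /= alpha_w.sum(axis=1, keepdims=True)  # softmax over each neighborhood
    return alpha_w @ Z                           # neighborhood-weighted features

# toy usage: 4-node path graph with self-loops, one-hot features
H = np.eye(4)
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
rng = np.random.default_rng(0)
out = gat_layer(H, A, rng.standard_normal((4, 3)), rng.standard_normal(6))
```

The masking step is what restricts attention to the graph structure; the full model additionally uses multiple heads and a nonlinearity on the output.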
April 30, 2018

Easing Non-Convex Optimization with Neural Networks

International Conference on Learning Representations (ICLR)

In this paper, we use a deep neural network with D parameters to parametrize the input space of a generic d-dimensional non-convex optimization problem. Our experiments show that minimizing over the D ≫ d over-parametrized variables provided by the deep neural network eases and accelerates the optimization of various non-convex test functions.

By: David Lopez-Paz, Levent Sagun
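The idea above in miniature: rather than optimizing the d variables of a non-convex test function directly, optimize the D ≫ d weights of a small ReLU network whose output is the candidate point. The test function, architecture, and step size here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                # non-convex test function: double well in x[0]
    return (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2

def grad_f(x):
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

d, hidden, k = 2, 16, 4
z = rng.standard_normal(k)               # fixed input to the reparametrizing network
W1 = 0.3 * rng.standard_normal((hidden, k))
W2 = 0.3 * rng.standard_normal((d, hidden))  # D = hidden*k + d*hidden >> d parameters

lr, losses = 0.01, []
for _ in range(1000):
    pre = W1 @ z
    h = np.maximum(pre, 0.0)             # ReLU hidden layer
    x = W2 @ h                           # candidate point, produced by the network
    losses.append(f(x))
    g = grad_f(x)                        # chain rule: push grad f back through the network
    gW2 = np.outer(g, h)
    gh = (W2.T @ g) * (pre > 0)          # ReLU gate on the backward pass
    W1 -= lr * np.outer(gh, z)
    W2 -= lr * gW2
```

Gradient descent on the network weights (W1, W2) rather than on x itself is the over-parametrized optimization; the loss trace should decrease.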
April 30, 2018

The Role of Minimal Complexity Functions in Unsupervised Learning of Semantic Mappings

International Conference on Learning Representations (ICLR)

We discuss the feasibility of the following learning problem: given unmatched samples from two domains and nothing else, learn a mapping between them that preserves semantics. With no paired samples and no definition of the semantic information, the problem might seem ill-posed.

By: Tomer Galanti, Lior Wolf, Sagie Benaim
April 30, 2018

NAM – Unsupervised Cross-Domain Image Mapping without Cycles or GANs

International Conference on Learning Representations (ICLR)

In this work, we introduce an alternative method: NAM. NAM relies on a pre-trained generative model of the source domain, and aligns each target image with an image sampled from the source distribution while jointly optimizing the domain mapping function. Experiments are presented validating the effectiveness of our method.

By: Yedid Hoshen, Lior Wolf
April 30, 2018

Unsupervised Machine Translation Using Monolingual Corpora Only

International Conference on Learning Representations (ICLR)

Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet these still require tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data.

By: Guillaume Lample, Alexis Conneau, Ludovic Denoyer, Marc'Aurelio Ranzato
April 30, 2018

When is a Convolutional Filter Easy to Learn?

International Conference on Learning Representations (ICLR)

We analyze the convergence of (stochastic) gradient descent algorithm for learning a convolutional filter with Rectified Linear Unit (ReLU) activation function. Our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input.

By: Simon S. Du, Jason D. Lee, Yuandong Tian
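A toy instance of the setting analyzed above: recovering a 1-D convolutional filter under ReLU with plain gradient descent. The data distribution, average pooling, and step size are illustrative choices, not the paper's assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 8, 3                      # samples, signal length, filter size
X = rng.standard_normal((n, d))
# sliding-window patches: patches[i, j] = X[i, j:j+k], shape (n, P, k)
patches = np.stack([X[:, j:j + k] for j in range(d - k + 1)], axis=1)

w_true = rng.standard_normal(k)          # teacher filter generating the labels
def predict(w):
    return np.maximum(patches @ w, 0.0).mean(axis=1)  # average-pooled ReLU responses

y = predict(w_true)
w = 0.1 * rng.standard_normal(k)         # student filter, random init
lr, losses = 0.5, []
for _ in range(400):
    pre = patches @ w                    # (n, P) pre-activations per patch
    err = np.maximum(pre, 0.0).mean(axis=1) - y
    losses.append(0.5 * np.mean(err ** 2))
    # ReLU passes gradient only where the pre-activation is positive
    gper = ((pre > 0)[..., None] * patches).mean(axis=1)  # (n, k)
    w -= lr * np.mean(err[:, None] * gper, axis=0)
```

The paper's contribution is proving when such dynamics converge without Gaussian-input assumptions; this sketch only makes the learning problem itself concrete.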
April 30, 2018

Residual Connections Encourage Iterative Inference

International Conference on Learning Representations (ICLR)

Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect.

By: Stanislaw Jastrzebski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, Yoshua Bengio
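The iterative-refinement view can be made concrete in miniature: stack residual blocks x → x + F(x) where F happens to be a small gradient step on a fixed energy, so each block slightly refines the running estimate. The energy E(x) = 0.5·||Ax − b||² and the step size are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)

def residual_block(x, step=0.02):
    # identity path plus a small correction F(x) = -step * grad E(x)
    return x + step * (-(A.T @ (A @ x - b)))

x = np.zeros(4)
errors = []
for _ in range(50):                      # 50 stacked (weight-tied) residual blocks
    errors.append(np.linalg.norm(A @ x - b))
    x = residual_block(x)
```

Because each block only adds a small correction to the identity path, the representation changes gradually across depth, which is the sense in which residual stacks can behave like unrolled iterative inference.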
April 30, 2018

Building Generalizable Agents with a Realistic and Rich 3D Environment

International Conference on Learning Representations (ICLR)

Teaching an agent to navigate in an unseen 3D environment is a challenging task, even in simulated environments. To generalize to unseen environments, an agent needs to be robust to low-level variations (e.g. color, texture, object changes) as well as high-level variations (e.g. layout changes of the environment). To improve overall generalization, all types of variation in the environment have to be taken into consideration via different levels of data augmentation.

By: Yi Wu, Yuxin Wu, Georgia Gkioxari, Yuandong Tian