
April 30, 2018

Identifying Analogies Across Domains

International Conference on Learning Representations (ICLR)

In this paper, we tackle the task of finding exact analogies between datasets, i.e. for every image from domain A, finding an analogous image in domain B. We present a matching-by-synthesis approach, AN-GAN, and show that it outperforms current techniques.

By: Yedid Hoshen, Lior Wolf
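
To make the task concrete, the sketch below performs only the matching step: given some mapping from A into B-space (in AN-GAN the mapping and the matching are optimized jointly, which this toy omits), every A-image is paired with its nearest B-image. The `map_a_to_b` argument is a hypothetical stand-in for that learned mapping.

```python
import numpy as np

def exact_analogies(images_a, images_b, map_a_to_b):
    """For every image in A, find the analogous image in B.

    images_a: (n, d) array of flattened domain-A images
    images_b: (m, d) array of flattened domain-B images
    map_a_to_b: callable mapping a batch of A-images into B-space;
                a stand-in for the jointly trained mapping in AN-GAN.
    Returns match[i] = index of the B-image matched to A-image i.
    """
    synthesized = map_a_to_b(images_a)  # (n, d), now in B-space
    # Pairwise squared Euclidean distances between synthesized and real B images.
    dists = ((synthesized[:, None, :] - images_b[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# Toy usage: an identity "mapping" on random data.
a = np.random.rand(5, 16)
b = np.random.rand(7, 16)
print(exact_analogies(a, b, lambda x: x))
```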
April 30, 2018

Easing Non-Convex Optimization with Neural Networks

International Conference on Learning Representations (ICLR)

In this paper, we use a deep neural network with D parameters to parametrize the input space of a generic d-dimensional non-convex optimization problem. Our experiments show that minimizing over the D ≫ d variables provided by the deep neural network eases and accelerates the optimization of various non-convex test functions.

By: David Lopez-Paz, Levent Sagun
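
The construction is easy to sketch: instead of descending on the d coordinates of x directly, fix a random input z and optimize the D ≫ d weights of a network whose output is x. In the toy comparison below, the Rastrigin test function, network shape, optimizer, and step count are all illustrative choices, not the paper's exact setup.

```python
import torch

# Non-convex test function (Rastrigin, here d = 2), an illustrative choice.
def rastrigin(x):
    return 10 * x.numel() + (x**2 - 10 * torch.cos(2 * torch.pi * x)).sum()

d = 2
# Direct optimization over the d variables...
x = torch.randn(d, requires_grad=True)
# ...versus optimizing the D >> d weights of a small net whose output is x.
net = torch.nn.Sequential(
    torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, d)
)
z = torch.randn(8)  # fixed input; only the network weights are trained

opt_direct = torch.optim.SGD([x], lr=1e-3)
opt_net = torch.optim.SGD(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt_direct.zero_grad(); f1 = rastrigin(x);      f1.backward(); opt_direct.step()
    opt_net.zero_grad();    f2 = rastrigin(net(z)); f2.backward(); opt_net.step()
print(f"direct: {f1.item():.3f}   over-parametrized: {f2.item():.3f}")
```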
April 30, 2018

Graph Attention Networks

International Conference on Learning Representations (ICLR)

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.

By: Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio
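
The core of a GAT layer is a masked softmax over attention coefficients e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), computed only across each node's neighbours. Below is a single-head NumPy sketch with toy shapes; splitting the attention vector into two halves is algebraically equivalent to the concatenated form.

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """Single-head graph attention layer.

    H: (n, f_in) node features; A: (n, n) {0,1} adjacency with self-loops;
    W: (f_in, f_out) shared weights; a: (2*f_out,) attention parameters.
    """
    Wh = H @ W
    f = Wh.shape[1]
    # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), computed as a sum of two projections.
    e = (Wh @ a[:f])[:, None] + (Wh @ a[f:])[None, :]
    e = np.where(e > 0, e, slope * e)
    # Masked softmax: attention is restricted to each node's neighbourhood.
    e = np.where(A > 0, e, -1e9)
    alpha = np.exp(e - e.max(1, keepdims=True))
    alpha /= alpha.sum(1, keepdims=True)
    return alpha @ Wh  # attention-weighted aggregation of neighbour features

# Toy graph: 3 nodes on a path, self-loops included.
A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], float)
H = np.random.rand(3, 4)
out = gat_layer(H, A, np.random.rand(4, 8), np.random.rand(16))
print(out.shape)  # (3, 8)
```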
April 30, 2018

NAM – Unsupervised Cross-Domain Image Mapping without Cycles or GANs

International Conference on Learning Representations (ICLR)

In this work, we introduce NAM, an alternative method for unsupervised cross-domain image mapping that needs neither cycle constraints nor adversarial training. NAM relies on a pre-trained generative model of the source domain and aligns each target image with an image sampled from the source distribution while jointly optimizing the domain-mapping function. Experiments validate the effectiveness of our method.

By: Yedid Hoshen, Lior Wolf
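
A minimal sketch of the alignment objective, assuming a pre-trained (and frozen) source-domain generator: each target image gets its own latent code, and the codes and the mapping function are optimized jointly so that mapped source samples match the targets. The toy models, image sizes, and plain pixel loss are illustrative stand-ins for the paper's actual components.

```python
import torch

# Assumed given: a pre-trained, frozen generator for the SOURCE domain (toy here).
G = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3 * 8 * 8)
)
for p in G.parameters():
    p.requires_grad_(False)

targets = torch.rand(10, 3 * 8 * 8)          # unlabeled TARGET-domain images (toy)
Z = torch.randn(10, 16, requires_grad=True)  # one latent code per target image
T = torch.nn.Linear(3 * 8 * 8, 3 * 8 * 8)    # the domain-mapping function (toy)

# Jointly optimize the per-image latents and the mapping so that a mapped
# source sample aligns with each target image: no cycles, no adversary.
opt = torch.optim.Adam([Z, *T.parameters()], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((T(G(Z)) - targets) ** 2).mean()
    loss.backward()
    opt.step()
print(f"final alignment loss: {loss.item():.4f}")
```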
April 30, 2018

Unsupervised Machine Translation Using Monolingual Corpora Only

International Conference on Learning Representations (ICLR)

Machine translation has recently achieved impressive performance thanks to advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet these approaches still require tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data.

By: Guillaume Lample, Alexis Conneau, Ludovic Denoyer, Marc'Aurelio Ranzato
April 30, 2018

When is a Convolutional Filter Easy to Learn?

International Conference on Learning Representations (ICLR)

We analyze the convergence of the (stochastic) gradient descent algorithm for learning a convolutional filter with the Rectified Linear Unit (ReLU) activation function. Our analysis does not rely on any specific form of the input distribution, and our proofs use only the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian inputs.

By: Simon S. Du, Jason D. Lee, Yuandong Tian
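
The learning problem can be illustrated with a minimal teacher-student demo: gradient descent on the squared loss of a single ReLU filter against responses generated by a ground-truth filter. The Gaussian toy inputs, step size, and initialization below are illustrative only; the paper's analysis is distribution-free, and whether descent actually converges is precisely what it characterizes.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

k, n, p = 5, 512, 9                    # filter size, samples, patches per input
w_star = rng.normal(size=k)            # ground-truth filter
X = rng.normal(size=(n, p, k))         # patches; Gaussian only for this demo
y = relu(X @ w_star).mean(axis=1)      # teacher responses (average pooling)

w = 0.1 * rng.normal(size=k)           # student filter, random initialization
lr = 0.2
for step in range(500):
    resid = relu(X @ w).mean(axis=1) - y                 # (n,)
    # Subgradient of 0.5 * mean residual^2; the ReLU derivative is an indicator.
    grad = (resid[:, None, None] * (X @ w > 0)[..., None] * X).mean(1).mean(0)
    w -= lr * grad
print("distance to the true filter:", np.linalg.norm(w - w_star))
```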
April 30, 2018

Building Generalizable Agents with a Realistic and Rich 3D Environment

International Conference on Learning Representations (ICLR)

Teaching an agent to navigate in an unseen 3D environment is a challenging task, even in simulated environments. To generalize to unseen environments, an agent needs to be robust to low-level variations (e.g. color, texture, object changes) as well as high-level variations (e.g. layout changes of the environment). To improve overall generalization, all types of variation in the environment have to be taken into consideration via different levels of data augmentation.

By: Yi Wu, Yuxin Wu, Georgia Gkioxari, Yuandong Tian
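
As a sketch of what multi-level augmentation can look like in practice, the snippet below samples a per-episode environment configuration that randomizes both low-level appearance and the high-level layout. The config keys, value lists, and `make_env` constructor are all hypothetical, not the paper's API.

```python
import random

# Hypothetical appearance options (low-level variations).
LOW_LEVEL = {"wall_color": ["white", "beige", "gray"],
             "floor_texture": ["wood", "tile", "carpet"]}

def sample_env_config(houses):
    """Domain-randomized episode config: low-level (color/texture) and
    high-level (layout, i.e. which house) variations as augmentation."""
    cfg = {k: random.choice(v) for k, v in LOW_LEVEL.items()}
    cfg["house_id"] = random.choice(houses)  # layout change = different house
    return cfg

for episode in range(3):
    cfg = sample_env_config(houses=["h001", "h002", "h003"])
    print(f"episode {episode}: {cfg}")
    # env = make_env(cfg)  # hypothetical constructor; train the agent on env
```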
April 30, 2018

Residual Connections Encourage Iterative Inference

International Conference on Learning Representations (ICLR)

Residual networks (ResNets) have become a prominent architecture in deep learning, yet a comprehensive understanding of them is still a topic of ongoing research. A recent view argues that ResNets perform iterative refinement of features; we attempt to further expose the properties behind this view.

By: Stanislaw Jastrzebski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, Yoshua Bengio
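
The iterative-refinement reading treats each block's output h + F(h) as a small correction to its input. The toy loop below simply unrolls one (randomly initialized) residual block and prints the relative size of each update; how large these updates are in trained networks is the kind of property the paper examines.

```python
import numpy as np

rng = np.random.default_rng(1)

# A single residual block: h -> h + F(h). Sharing weights across "layers"
# turns depth into iteration, which is the iterative-inference reading.
W1 = 0.1 * rng.normal(size=(16, 16))
W2 = 0.1 * rng.normal(size=(16, 16))
F = lambda h: np.maximum(h @ W1, 0) @ W2  # toy two-layer residual branch

h = rng.normal(size=16)
for block in range(8):
    update = F(h)
    ratio = np.linalg.norm(update) / np.linalg.norm(h)
    print(f"block {block}: |F(h)| / |h| = {ratio:.4f}")
    h = h + update  # each block refines, rather than replaces, its input
```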
April 30, 2018

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks

International Conference on Learning Representations (ICLR)

We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network.

By: Shiyu Liang, Yixuan Li, R. Srikant
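
ODIN's two ingredients, as described in the paper, are temperature scaling of the softmax and a small FGSM-style input perturbation that widens the score gap between in- and out-of-distribution images. The sketch below shows both; the temperature and epsilon values are illustrative placeholders, and the untrained toy model stands in for a pre-trained classifier.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """ODIN out-of-distribution score: temperature-scaled max softmax
    probability of a slightly perturbed input (higher = more in-distribution)."""
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    # Perturb the input to increase the max softmax score (a gradient-sign
    # step on the negative log-probability of the predicted class).
    loss = -F.log_softmax(logits, dim=1).max(dim=1).values.sum()
    loss.backward()
    x_pert = x - epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / temperature, dim=1)
    return probs.max(dim=1).values  # threshold these to flag OOD inputs

# Toy usage with an untrained classifier (stand-in for a pre-trained network).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
scores = odin_score(model, torch.rand(4, 3, 32, 32))
print(scores)
```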
April 30, 2018

Multi-Scale Dense Networks for Resource Efficient Image Classification

International Conference on Learning Representations (ICLR)

In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network’s prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across “easier” and “harder” inputs.

By: Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Weinberger
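
The anytime setting can be illustrated with a generic early-exit cascade: every stage has its own classifier head, and prediction stops as soon as a head is confident enough, so "easier" inputs spend less compute than "harder" ones. The toy linear stages below are a stand-in for MSDNet's multi-scale dense blocks, and the confidence threshold is an illustrative choice.

```python
import torch

# A cascade of stages, each with its own classifier head (toy stand-in).
stages = torch.nn.ModuleList([torch.nn.Linear(32, 32) for _ in range(3)])
heads = torch.nn.ModuleList([torch.nn.Linear(32, 10) for _ in range(3)])

def anytime_predict(x, confidence_threshold=0.9):
    """Return a prediction as soon as one head is confident enough; under a
    hard deadline one would instead return the most recent head's output."""
    h = x
    for stage, head in zip(stages, heads):
        h = torch.relu(stage(h))
        probs = torch.softmax(head(h), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= confidence_threshold:
            return pred.item(), conf.item()  # "easy" input: exit early
    return pred.item(), conf.item()          # "hard" input: used full budget

print(anytime_predict(torch.rand(1, 32)))
```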