
May 12, 2019

Provably Accelerated Randomized Gossip Algorithms

IEEE International Conference on Acoustics, Speech, and Signal Processing

In this work we present novel provably accelerated gossip algorithms for solving the average consensus problem.
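For context, the unaccelerated baseline these methods improve on is plain randomized pairwise gossip: at each step a random edge is activated and its two endpoints average their values. A minimal illustrative sketch (function and variable names are my own, not from the paper):

```python
import random

def randomized_gossip(values, edges, steps=10000, seed=0):
    """Plain randomized pairwise gossip for average consensus:
    at each step one random edge (i, j) is activated and both
    endpoints replace their values with the pair's average.
    Each step preserves the global sum, so all nodes converge
    to the network-wide mean."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x

# Ring of 4 nodes: every node approaches the mean of the inputs.
vals = randomized_gossip([1.0, 2.0, 3.0, 4.0],
                         [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The paper's contribution is accelerating the convergence rate of schemes like this one; the sketch shows only the baseline dynamics.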

By: Nicolas Loizou, Michael Rabbat, Peter Richtárik

May 4, 2019

Quasi-Hyperbolic Momentum and Adam for Deep Learning

International Conference on Learning Representations (ICLR)

Momentum-based acceleration of stochastic gradient descent (SGD) is widely used in deep learning. We propose the quasi-hyperbolic momentum algorithm (QHM) as an extremely simple alteration of momentum SGD, averaging a plain SGD step with a momentum step. We describe numerous connections to and identities with other algorithms, and we characterize the set of two-state optimization algorithms that QHM can recover.
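The "extremely simple alteration" described above amounts to one extra interpolation coefficient. A sketch of a single QHM step, following the update rule in the paper (names and default-style hyperparameters here are illustrative):

```python
def qhm_step(theta, buf, grad, lr=0.1, beta=0.9, nu=0.7):
    """One quasi-hyperbolic momentum (QHM) update.
    buf holds the exponentially discounted gradient average;
    the parameter step averages a plain SGD step (weight 1 - nu)
    with a momentum step (weight nu), as the abstract describes."""
    new_buf = [beta * b + (1 - beta) * g for b, g in zip(buf, grad)]
    new_theta = [t - lr * ((1 - nu) * g + nu * b)
                 for t, g, b in zip(theta, grad, new_buf)]
    return new_theta, new_buf

theta, buf = qhm_step([1.0], [0.0], [2.0])
```

Setting nu=0 recovers plain SGD exactly, which is one of the identities the paper characterizes.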

By: Jerry Ma, Denis Yarats

May 1, 2019

Learning graphs from data: A signal representation perspective

IEEE Signal Processing Magazine

In this tutorial overview, we survey solutions to the problem of graph learning, including classical viewpoints from statistics and physics, and more recent approaches that adopt a graph signal processing (GSP) perspective.

By: Xiaowen Dong, Dorina Thanou, Michael Rabbat, Pascal Frossard

January 30, 2019

Large-Scale Visual Relationship Understanding

AAAI Conference on Artificial Intelligence (AAAI)

Large-scale visual understanding is challenging, as it requires a model to handle the highly skewed and imbalanced distribution of relationship triples. In real-world scenarios with large numbers of objects and relations, some are seen very commonly while others are barely seen. We develop a new relationship detection model that embeds objects and relations into two vector spaces where both discriminative capability and semantic affinity are preserved.

By: Ji Zhang, Yannis Kalantidis, Marcus Rohrbach, Manohar Paluri, Ahmed Elgammal, Mohamed Elhoseiny

January 30, 2019

Memorize or generalize? Searching for a compositional RNN in a haystack

IJCAI-ECAI Workshop: Architectures and Evaluation for Generality, Autonomy & Progress in AI

Neural networks are very powerful learning systems, but they do not readily generalize from one task to the other. This is partly due to the fact that they do not learn in a compositional way, that is, by discovering skills that are shared by different tasks, and recombining them to solve new problems. In this paper, we explore the compositional generalization capabilities of recurrent neural networks (RNNs).

By: Adam Liška, Germán Kruszewski, Marco Baroni

January 28, 2019

Combined Reinforcement Learning via Abstract Representations

AAAI Conference on Artificial Intelligence (AAAI)

In the quest for efficient and robust reinforcement learning methods, both model-free and model-based approaches offer advantages. In this paper we propose a new way of explicitly bridging both approaches via a shared low-dimensional learned encoding of the environment, meant to capture summarizing abstractions.

By: Vincent Francois-Lavet, Yoshua Bengio, Doina Precup, Joelle Pineau

January 18, 2019

On-line Adaptative Curriculum Learning for GANs

AAAI Conference on Artificial Intelligence (AAAI)

Generative Adversarial Networks (GANs) can successfully approximate a probability distribution and produce realistic samples. However, open questions such as sufficient convergence conditions and mode collapse still persist. In this paper, we build on existing work in the area by proposing a novel framework for training the generator against an ensemble of discriminator networks, which can be seen as a one-student/multiple-teachers setting. We formalize this problem within the full-information adversarial bandit framework, where we evaluate the capability of an algorithm to select mixtures of discriminators for providing the generator with feedback during learning.
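Full-information adversarial bandit problems are commonly addressed with exponential-weights (Hedge-style) updates over the arms, here the discriminators. A generic sketch of one such update, not the paper's exact algorithm (names and the learning rate are illustrative):

```python
import math

def hedge_update(weights, rewards, eta=1.0):
    """One full-information exponential-weights step: scale each
    arm's weight by exp(eta * reward), then renormalize so the
    result is a probability distribution. In the one-student /
    multiple-teachers reading, this is a mixture over the
    ensemble of discriminators providing feedback."""
    scaled = [w * math.exp(eta * r) for w, r in zip(weights, rewards)]
    total = sum(scaled)
    return [s / total for s in scaled]

# Two discriminators, the first more informative this round:
mix = hedge_update([0.5, 0.5], [1.0, 0.0])
```

Because the setting is full-information, the update uses feedback from every discriminator each round, not just the one that was played.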

By: Thang Doan, João Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, R. Devon Hjelm


January 18, 2019

Spatially Invariant Unsupervised Object Detection with Convolutional Neural Networks

AAAI Conference on Artificial Intelligence (AAAI)

There are many reasons to expect an ability to reason in terms of objects to be a crucial skill for any generally intelligent agent. Indeed, recent machine learning literature is replete with examples of the benefits of object-like representations: generalization, transfer to new tasks, and interpretability, among others. However, in order to reason in terms of objects, agents need a way of discovering and detecting objects in the visual world – a task which we call unsupervised object detection.

By: Eric Crawford, Joelle Pineau

December 14, 2018

PyText: A seamless path from NLP research to production

We introduce PyText – a deep learning based NLP modeling framework built on PyTorch. PyText addresses the often-conflicting requirements of enabling rapid experimentation and of serving models at scale.

By: Ahmed Aly, Kushal Lakhotia, Shicong Zhao, Mrinal Mohit, Barlas Oğuz, Abhinav Arora, Sonal Gupta, Christopher Dewan, Stef Nelson-Lindall, Rushin Shah

December 11, 2018

The Costs of Overambitious Seeding of Social Products

International Conference on Complex Networks and their Applications

Product-adoption scenarios are often theoretically modeled as “influence-maximization” (IM) problems, where people influence one another to adopt and the goal is to find a limited set of people to “seed” so as to maximize long-term adoption. In many IM models, if there is no budgetary limit on seeding, the optimal approach involves seeding everybody immediately. Here, we argue that this approach can lead to suboptimal outcomes for “social products” that allow people to communicate with one another.
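A standard way such IM problems are formalized is the independent-cascade model, where each new adopter gets one chance to convert each neighbor with a fixed probability. A minimal sketch of that generic model (the specific model and parameters in the paper may differ; names here are illustrative):

```python
import random

def independent_cascade(adj, seeds, p=0.1, seed=0):
    """Simulate one independent-cascade adoption process.
    adj maps each node to its neighbors; seeds are the initially
    seeded adopters. Each newly activated node gets a single
    chance to activate each neighbor with probability p.
    Returns the set of eventual adopters."""
    rng = random.Random(seed)
    adopted = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in adopted and rng.random() < p:
                    adopted.add(v)
                    nxt.append(v)
        frontier = nxt
    return adopted

# Line graph 0 -> 1 -> 2 with p = 1: seeding node 0 reaches everyone.
spread = independent_cascade({0: [1], 1: [2], 2: []}, {0}, p=1.0)
```

In a model like this with no seeding budget, seeding every node trivially maximizes adoption; the paper's argument is that for social products, where value comes from communicating with other users, that strategy can backfire.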

By: Shankar Iyer, Lada Adamic