
388 Results

December 12, 2017

Supporting Diverse Dynamic Intent-based Policies using Janus

International Conference on emerging Networking EXperiments and Technologies (CoNEXT)

In this paper we propose Janus, a system which makes two major contributions to network policy abstractions. First, we extend the prior policy graph abstraction model to represent complex QoS and dynamic stateful/temporal policies. Second, we convert the policy configuration problem into an optimization problem whose goal is to maximize the number of satisfied and configured policies while minimizing the number of path changes in dynamic environments.

By: Anubhavnidhi Abhashkumar, Joon-Myung Kang, Sujata Banerjee, Aditya Akella, Ying Zhang, Wenfei Wu
December 10, 2017

Social Structure and Trust in Massive Digital Markets

International Conference on Information Systems (ICIS)

In this paper we measure the extent to which situating transactions in networks can generate trust in online marketplaces with an empirical approach that provides external validity while eliminating many potential confounds.

By: David Holtz, Diana Lynn MacLean, Sinan Aral
December 4, 2017

Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples

Neural Information Processing Systems (NIPS)

We introduce a novel, flexible approach named Houdini for generating adversarial examples specifically tailored to the final performance measure of the task considered, be it combinatorial or non-decomposable.

By: Moustapha Cisse, Yossi Adi, Natalia Neverova, Joseph Keshet
December 4, 2017

Attentive Explanations: Justifying Decisions and Pointing to the Evidence

Interpretable Machine Learning Symposium at NIPS

In this work, we emphasize the importance of model explanation in various forms such as visual pointing and textual justification.

By: Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, Marcus Rohrbach
December 4, 2017

On the Optimization Landscape of Tensor Decompositions

Neural Information Processing Systems (NIPS)

In this paper, we analyze the optimization landscape of the random over-complete tensor decomposition problem, which has many applications in unsupervised learning, especially in learning latent variable models. In practice, this problem can be solved efficiently by gradient ascent on a non-convex objective.

By: Rong Ge, Tengyu Ma
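As a rough illustration of the gradient-ascent heuristic mentioned in the abstract, the sketch below runs projected gradient ascent on f(x) = ⟨T, x ⊗ x ⊗ x⟩ over the unit sphere. For simplicity it uses an orthogonal (rather than over-complete) tensor; all names, step sizes, and constants here are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy setting: T = sum_k a_k ⊗ a_k ⊗ a_k with orthonormal components a_k.
# Projected gradient ascent on f(x) = <T, x⊗x⊗x> should recover one a_k.
rng = np.random.default_rng(0)
d = 8
A = np.linalg.qr(rng.standard_normal((d, d)))[0].T   # rows a_k, orthonormal
T = np.einsum("ki,kj,kl->ijl", A, A, A)              # T = sum_k a_k^{(x)3}

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
for _ in range(1000):
    grad = 3.0 * np.einsum("ijl,j,l->i", T, x, x)    # grad f(x) = 3 T(., x, x)
    x = x + 0.1 * grad
    x /= np.linalg.norm(x)                           # project back to the sphere

best = np.max(np.abs(A @ x))   # correlation with the closest true component
```

In this orthogonal toy case the iterates behave like a tensor power method: the largest initial coordinate is amplified each step, so `best` ends up near 1.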
December 4, 2017

Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model

Neural Information Processing Systems (NIPS)

We present a novel training framework for neural sequence models, particularly for grounded dialog generation.

By: Jiasen Lu, Anitha Kannan, Jianwei Yang, Devi Parikh, Dhruv Batra
December 4, 2017

Fader Networks: Manipulating Images by Sliding Attributes

Neural Information Processing Systems (NIPS)

This paper introduces a new encoder-decoder architecture that is trained to reconstruct images by disentangling the salient information of the image and the values of attributes directly in the latent space.

By: Neil Zeghidour, Nicolas Usunier, Antoine Bordes, Ludovic Denoyer, Marc'Aurelio Ranzato
December 4, 2017

One-Sided Unsupervised Domain Mapping

Neural Information Processing Systems (NIPS)

In this work, we present a method of learning G_AB without learning G_BA. This is done by learning a mapping that maintains the distance between a pair of samples.

By: Sagie Benaim, Lior Wolf
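The distance constraint described in the abstract can be illustrated as a simple loss term. The function below is a hypothetical sketch (the name, the L1 distance, and the standardization step are assumptions, not the paper's exact formulation): it penalizes a mapping G_AB for changing the standardized pairwise distances within a batch.

```python
import numpy as np

def pairwise_l1(x):
    # x: (batch, features) -> (batch, batch) matrix of L1 distances
    return np.abs(x[:, None, :] - x[None, :, :]).sum(axis=-1)

def distance_preserving_loss(x_a, g_ab):
    """Penalize changes in standardized pairwise distances under G_AB."""
    y = g_ab(x_a)
    d_a, d_b = pairwise_l1(x_a), pairwise_l1(y)
    # standardize so the distance scales of domains A and B are comparable
    d_a = (d_a - d_a.mean()) / (d_a.std() + 1e-8)
    d_b = (d_b - d_b.mean()) / (d_b.std() + 1e-8)
    return np.abs(d_a - d_b).mean()

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 32))
loss_identity = distance_preserving_loss(x, lambda z: z)        # exactly 0
loss_reversed = distance_preserving_loss(x, lambda z: z[::-1])  # > 0
```

An identity mapping leaves the distance matrix untouched, so the loss is zero; any mapping that scrambles which samples are close to which (here, reversing the batch) is penalized.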
December 4, 2017

ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games

Neural Information Processing Systems (NIPS)

In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research.

By: Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, Larry Zitnick
December 4, 2017

Unbounded Cache Model for Online Language Modeling with Open Vocabulary

Neural Information Processing Systems (NIPS)

In this paper, we propose an extension of continuous cache models, which can scale to larger contexts. In particular, we use a large scale non-parametric memory component that stores all the hidden activations seen in the past.

By: Edouard Grave, Moustapha Cisse, Armand Joulin
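A minimal sketch of the non-parametric memory component described above: store every (hidden activation, next word) pair seen so far, and predict by k-nearest-neighbor search over the stored activations. The class name, the toy "hidden states", and the exact lookup are illustrative assumptions; the paper combines such a cache with a parametric language model and uses approximate nearest neighbors to scale.

```python
import numpy as np
from collections import Counter

class UnboundedCache:
    """Stores all past (hidden state, next word) pairs; predicts by k-NN."""

    def __init__(self, k=4):
        self.k = k
        self.keys, self.words = [], []

    def add(self, hidden, next_word):
        self.keys.append(np.asarray(hidden, dtype=float))
        self.words.append(next_word)

    def prob(self, hidden):
        """Distribution over words from the k nearest stored activations."""
        keys = np.stack(self.keys)
        dists = np.linalg.norm(keys - np.asarray(hidden, dtype=float), axis=1)
        nearest = np.argsort(dists)[: self.k]
        counts = Counter(self.words[i] for i in nearest)
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

cache = UnboundedCache(k=2)
cache.add([1.0, 0.0], "cat")
cache.add([0.9, 0.1], "cat")
cache.add([0.0, 1.0], "dog")
p = cache.prob([1.0, 0.05])   # query near the "cat" activations
```

Because the memory is append-only and never truncated, it naturally handles open vocabularies: any word ever observed can be retrieved, regardless of how long ago it appeared.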