
October 22, 2017

Transitive Invariance for Self-supervised Visual Representation Learning

International Conference on Computer Vision (ICCV)

In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.).

By: Xiaolong Wang, Kaiming He, Abhinav Gupta
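The two invariance constraints above can be pictured with a simple ranking objective. Below is a minimal, illustrative PyTorch sketch, not the paper's implementation (which organizes patches into a graph and trains a Triplet-Siamese network): an anchor embedding is pulled toward a "positive" (another view of the same object, or a linked same-category instance) and pushed away from a random negative. The margin and embedding size are illustrative choices.

```python
import torch
import torch.nn.functional as F

def ranking_loss(anchor, positive, negative, margin=0.5):
    # Pull the anchor toward the positive (same object under a new
    # viewpoint/pose, or a same-category instance) and push it away
    # from the negative, up to a margin.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy batch of 8 random 128-d embeddings per role.
a, p, n = (torch.randn(8, 128) for _ in range(3))
print(ranking_loss(a, p, n).item())
```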
October 22, 2017

Unsupervised Creation of Parameterized Avatars

International Conference on Computer Vision (ICCV)

We study the problem of mapping an input image to a tied pair consisting of a vector of parameters and an image that is created using a graphical engine from the vector of parameters. The mapping’s objective is to have the output image as similar as possible to the input image.

By: Lior Wolf, Yaniv Taigman, Adam Polyak
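The mapping objective reads naturally as a reconstruction loss: learn f so that rendering the predicted parameters reproduces the input, i.e. G(f(x)) ≈ x. A toy PyTorch sketch under strong assumptions: the real graphics engine is fixed and typically non-differentiable, so one cannot simply backpropagate through it as done here; the tiny linear "engine" below is purely a stand-in for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in differentiable "engine" G: parameter vector -> image.
G = nn.Sequential(nn.Linear(10, 3 * 32 * 32), nn.Sigmoid())
# f: image -> parameter vector (the mapping being learned).
f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

x = torch.rand(4, 3, 32, 32)      # toy input images
params = f(x)                     # predicted avatar parameters
recon = G(params).view_as(x)      # image rendered from the parameters
loss = F.l1_loss(recon, x)        # the output should resemble the input
loss.backward()
print(loss.item())
```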
September 7, 2017

Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog

Conference on Empirical Methods in Natural Language Processing (EMNLP)

In this paper, using a Task & Talk reference game between two agents as a testbed, we present a sequence of ‘negative’ results culminating in a ‘positive’ one – showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional.

By: Satwik Kottur, José M.F. Moura, Stefan Lee, Dhruv Batra
July 30, 2017

Low-Cost 360 Stereo Photography and Video Capture

SIGGRAPH 2017

In this work, we describe a method that takes images from two 360° spherical cameras and synthesizes an omni-directional stereo panorama with stereo in all directions. Our proposed method has a lower equipment cost than camera-ring alternatives, can be assembled with currently available off-the-shelf equipment, and is relatively small and lightweight compared to the alternatives.

By: Kevin Matzen, Michael Cohen, Bryce Evans, Johannes Kopf, Richard Szeliski
July 21, 2017

Densely Connected Convolutional Networks

CVPR 2017

In this paper, we embrace the observation that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output, and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.

By: Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
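The connectivity pattern is easy to state in code: layer i takes the concatenation of all earlier feature maps as input and emits a fixed number of new channels (the growth rate). A minimal PyTorch sketch of one dense block; the layer count, growth rate, and channel widths are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenated feature maps of all
    preceding layers and contributes `growth_rate` new channels."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Dense connectivity: concatenate everything seen so far.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=16)
print(block(torch.randn(1, 16, 32, 32)).shape)  # (1, 64, 32, 32)
```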
July 21, 2017

Semantic Amodal Segmentation

CVPR 2017

Common visual recognition tasks such as classification, object detection, and semantic segmentation are rapidly reaching maturity, and given the recent rate of progress, it is not unreasonable to conjecture that techniques for many of these problems will approach human levels of performance in the next few years. In this paper we look to the future: what is the next frontier in visual recognition?

By: Yan Zhu, Yuandong Tian, Dimitris Metaxas, Piotr Dollar
July 21, 2017

Aggregated Residual Transformations for Deep Neural Networks

CVPR 2017

We present a simple, highly modularized network architecture for image classification.

By: Saining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, Kaiming He
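The "aggregated transformations" idea behind this architecture (ResNeXt) splits a bottleneck's 3x3 convolution into many parallel low-dimensional paths, which is exactly what a grouped convolution computes. A minimal PyTorch sketch of one block; the widths follow the commonly cited 32x4d setting but are illustrative here.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Bottleneck block whose 3x3 convolution is split into
    `cardinality` parallel paths via a grouped convolution; the
    result is added back to the shortcut as in a residual block."""
    def __init__(self, channels, cardinality=32, bottleneck_width=4):
        super().__init__()
        inner = cardinality * bottleneck_width  # 32 * 4 = 128
        self.net = nn.Sequential(
            nn.Conv2d(channels, inner, 1, bias=False),
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, inner, 3, padding=1,
                      groups=cardinality, bias=False),  # 32 parallel paths
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.net(x))  # residual shortcut

block = ResNeXtBlock(channels=256)
print(block(torch.randn(1, 256, 14, 14)).shape)  # (1, 256, 14, 14)
```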
July 21, 2017

Feature Pyramid Networks for Object Detection

CVPR 2017

In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost.

By: Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie
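The pyramid construction can be sketched directly: project each backbone stage to a common width with a 1x1 "lateral" convolution, then merge top-down by upsampling the coarser map and adding it in. A minimal PyTorch sketch; the channel counts mimic a ResNet backbone, and details such as the 3x3 smoothing convolution follow common implementations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPN(nn.Module):
    """Top-down pathway with lateral connections: each backbone level
    is projected to a common channel width, then merged with the
    upsampled coarser level and smoothed with a 3x3 convolution."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, 3, padding=1)
            for _ in in_channels)

    def forward(self, feats):  # feats ordered finest to coarsest (C2..C5)
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down: upsample the coarser merged map and add it in.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(l) for s, l in zip(self.smooth, laterals)]

fpn = FPN()
feats = [torch.randn(1, c, s, s)
         for c, s in zip((256, 512, 1024, 2048), (64, 32, 16, 8))]
for p in fpn(feats):
    print(p.shape)  # 256 channels at 64, 32, 16, and 8 resolution
```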
July 21, 2017

Learning Features by Watching Objects Move

CVPR 2017

This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation.

By: Deepak Pathak, Ross Girshick, Piotr Dollar, Trevor Darrell, Bharath Hariharan
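One way to picture the setup: motion segmentation of unlabeled video yields a foreground mask per frame, and a network is trained to predict that mask from the single static frame, which forces it to learn object-like features without human labels. A toy PyTorch sketch with random stand-in data; the paper's actual network and motion-segmentation pipeline are far larger.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small conv net predicting a per-pixel foreground mask from one frame.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),  # mask logits
)

frame = torch.rand(2, 3, 64, 64)  # single RGB frames (toy data)
# Pseudo ground truth: a motion-based segmentation of the video would
# supply this mask; here it is random for illustration.
motion_mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = F.binary_cross_entropy_with_logits(net(frame), motion_mask)
loss.backward()
print(loss.item())
```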
July 21, 2017

Link the head to the “beak”: Zero Shot Learning from Noisy Text Description at Part Precision

CVPR 2017

In this paper, we study learning visual classifiers from unstructured text descriptions at part precision with no training images. We propose a learning framework that is able to connect text terms to their relevant parts and suppress connections to non-visual text terms without any part-text annotations.

By: Mohamed Elhoseiny, Yizhe Zhu, Han Zhang, Ahmed Elgammal
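A schematic of the term-to-part connection: score every (text term, visual part) pair with a learned bilinear map, link each term to its best-matching part, and let terms with uniformly low scores contribute little, which is the suppression of non-visual terms. The tensor shapes below are invented for illustration; this is not the paper's exact formulation.

```python
import torch

# T text terms, P image parts, with separate embedding widths.
T, P, d_t, d_v = 5, 7, 64, 32
text = torch.randn(T, d_t)    # term embeddings from one class description
parts = torch.randn(P, d_v)   # part features extracted from one image
W = torch.randn(d_t, d_v, requires_grad=True)  # learned bilinear map

scores = text @ W @ parts.T   # (T, P) term-to-part compatibilities
# Each term links to its most relevant part; averaging gives a class
# score, and terms whose best score stays low barely contribute.
class_score = scores.max(dim=1).values.mean()
class_score.backward()
print(scores.shape, class_score.item())
```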