286 Results

June 18, 2018

Data Distillation: Towards Omni-Supervised Learning

Computer Vision and Pattern Recognition (CVPR)

We investigate omni-supervised learning, a special regime of semi-supervised learning in which the learner exploits all available labeled data plus internet-scale sources of unlabeled data.

By: Ilija Radosavovic, Piotr Dollar, Ross Girshick, Georgia Gkioxari, Kaiming He
June 18, 2018

Detecting and Recognizing Human-Object Interactions

Computer Vision and Pattern Recognition (CVPR)

In this paper, we address the task of detecting ⟨human, verb, object⟩ triplets in challenging everyday photos. We propose a novel model that is driven by a human-centric approach. Our hypothesis is that the appearance of a person – their pose, clothing, action – is a powerful cue for localizing the objects they are interacting with.

By: Georgia Gkioxari, Ross Girshick, Piotr Dollar, Kaiming He
June 18, 2018

Low-shot learning with large-scale diffusion

Computer Vision and Pattern Recognition (CVPR)

This paper considers the problem of inferring image labels from images when only a few annotated examples are available at training time.

By: Matthijs Douze, Arthur Szlam, Bharath Hariharan, Hervé Jégou
June 18, 2018

DensePose: Dense Human Pose Estimation In The Wild

Computer Vision and Pattern Recognition (CVPR)

In this work we establish dense correspondences between an RGB image and a surface-based representation of the human body, a task we refer to as dense human pose estimation. We gather dense correspondences for 50K persons appearing in the COCO dataset by introducing an efficient annotation pipeline. We then use our dataset to train CNN-based systems that deliver dense correspondence ‘in the wild’, namely in the presence of background, occlusions and scale variations.

By: Riza Alp Guler, Natalia Neverova, Iasonas Kokkinos
June 18, 2018

Stacked Latent Attention for Multimodal Reasoning

Computer Vision and Pattern Recognition (CVPR)

Attention has proven to be a pivotal development in deep learning and has been used for a multitude of multimodal learning tasks such as visual question answering and image captioning. In this work, we pinpoint potential limitations in the design of a traditional attention model.

By: Haoqi Fan, Jiatong Zhou
June 18, 2018

Embodied Question Answering

Computer Vision and Pattern Recognition (CVPR)

We present a new AI task – Embodied Question Answering (EmbodiedQA) – where an agent is spawned at a random location in a 3D environment and asked a question (‘What color is the car?’). In order to answer, the agent must first intelligently navigate to explore the environment, gather necessary visual information through first-person (egocentric) vision, and then answer the question (‘orange’). 

By: Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra
June 18, 2018

Eye In-Painting with Exemplar Generative Adversarial Networks

Computer Vision and Pattern Recognition (CVPR)

This paper introduces a novel approach to in-painting where the identity of the object to remove or change is preserved and accounted for at inference time: Exemplar GANs (ExGANs). ExGANs are a type of conditional GAN that utilize exemplar information to produce high-quality, personalized in-painting results.

By: Brian Dolhansky, Cristian Canton Ferrer
June 18, 2018

Improving Landmark Localization with Semi-Supervised Learning

Computer Vision and Pattern Recognition (CVPR)

We present two techniques to improve landmark localization in images from partially annotated datasets. Our primary goal is to leverage the common situation where precise landmark locations are only provided for a small data subset, but where class labels for classification or regression tasks related to the landmarks are more abundantly available.

By: Sina Honari, Pavlo Molchanov, Stephen Tyree, Pascal Vincent, Christopher Pal, Jan Kautz
June 18, 2018

Multimodal Explanations: Justifying Decisions and Pointing to the Evidence

Computer Vision and Pattern Recognition (CVPR)

Deep models that are both effective and explainable are desirable in many settings; prior explainable models have been unimodal, offering either image-based visualization of attention weights or text-based generation of post-hoc justifications. We propose a multimodal approach to explanation, and argue that the two modalities provide complementary explanatory strengths.

By: Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, Marcus Rohrbach
June 18, 2018

Don’t Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering

Computer Vision and Pattern Recognition (CVPR)

A number of studies have found that today’s Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers.

By: Aishwarya Agrawal, Dhruv Batra, Devi Parikh, Aniruddha Kembhavi