Publication

Slim DensePose: Thrifty Learning from Sparse Annotations and Motion Cues

Conference: Computer Vision and Pattern Recognition (CVPR)


Abstract

DensePose supersedes traditional landmark detectors by densely mapping image pixels to body surface coordinates. This power, however, comes at a greatly increased annotation time, as supervising the model requires manually labelling hundreds of points per pose instance. In this work, we thus seek methods to significantly slim down the DensePose annotations, proposing more efficient data collection strategies. In particular, we demonstrate that if annotations are collected in video frames, their efficacy can be multiplied for free by using motion cues. To explore this idea, we introduce DensePose-Track, a dataset of videos where selected frames are annotated in the traditional DensePose manner. Then, building on geometric properties of the DensePose mapping, we use the video dynamics to propagate ground-truth annotations in time as well as to learn from Siamese equivariance constraints. Having performed an exhaustive empirical evaluation of various data annotation and learning strategies, we demonstrate that doing so can deliver significantly improved pose estimation results over strong baselines. However, despite what is suggested by some recent works, we show that merely synthesizing motion patterns by applying geometric transformations to isolated frames is significantly less effective, and that motion cues help much more when they are extracted from videos.
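To make the two motion-based supervision ideas concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: (1) propagating sparse DensePose point annotations to a nearby frame using optical flow, and (2) a Siamese equivariance loss comparing the warped prediction on one frame with the prediction on the warped frame. The function names (propagate_points, equivariance_loss), tensor shapes, and the optical-flow convention are illustrative assumptions.

import torch
import torch.nn.functional as F

def propagate_points(points_xy, flow):
    """Shift annotated pixel coordinates (N, 2) by the optical flow (2, H, W)
    sampled at those locations, yielding pseudo ground truth in a nearby frame.
    Sketch only: the flow direction/convention is an assumption."""
    _, H, W = flow.shape
    # Normalize (x, y) coordinates to [-1, 1] for grid_sample.
    grid = points_xy.clone()
    grid[:, 0] = 2.0 * grid[:, 0] / (W - 1) - 1.0
    grid[:, 1] = 2.0 * grid[:, 1] / (H - 1) - 1.0
    grid = grid.view(1, 1, -1, 2)                        # (1, 1, N, 2)
    sampled = F.grid_sample(flow.unsqueeze(0), grid,     # (1, 2, 1, N)
                            align_corners=True)
    return points_xy + sampled[0, :, 0, :].t()           # displaced points (N, 2)

def equivariance_loss(model, frame, warped_frame, flow):
    """Penalize disagreement between the warped prediction on `frame` and the
    prediction on `warped_frame` (Siamese equivariance constraint)."""
    pred = model(frame)                                  # (1, C, H, W) dense surface-coordinate map
    pred_on_warped = model(warped_frame)
    _, _, H, W = pred.shape
    # Build a sampling grid that displaces each pixel by the flow.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    coords = torch.stack((xs, ys), dim=-1).float() + flow.permute(1, 2, 0)
    coords[..., 0] = 2.0 * coords[..., 0] / (W - 1) - 1.0
    coords[..., 1] = 2.0 * coords[..., 1] / (H - 1) - 1.0
    warped_pred = F.grid_sample(pred, coords.unsqueeze(0), align_corners=True)
    return F.l1_loss(warped_pred, pred_on_warped)

In this sketch, propagate_points multiplies the value of each manually annotated frame by reusing its labels in neighbouring frames, while equivariance_loss supplies an unsupervised signal on unlabelled frames; both rely on an externally computed optical-flow field.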

