
Vid2Game: Controllable Characters Extracted from Real-World Videos

International Conference on Learning Representations (ICLR)


Abstract

We extract a controllable model from a video of a person performing a certain activity. The model generates novel image sequences of that person according to user-defined control signals, which typically mark the displacement of the moving body. The generated video can have an arbitrary background, and it effectively captures both the dynamics and the appearance of the person.

The method is based on two networks. The first maps a current pose and a single-instance control signal to the next pose. The second maps the current pose, the new pose, and a given background to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes.
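
To make the two-stage design concrete, below is a minimal PyTorch sketch of the inference loop the abstract describes: a first network advances the pose from a user control signal, and a second network renders each pose pair onto a given background. The class names, the pose and control dimensions, and the MLP internals are illustrative assumptions only; the paper's actual architectures are not reproduced here.

import torch
import torch.nn as nn

# Assumed dimensions for illustration: 17 two-dimensional joints and a
# two-dimensional displacement control signal.
POSE_DIM, CTRL_DIM = 34, 2

class Pose2Pose(nn.Module):
    """First network: maps (current pose, control signal) to the next pose.
    A plain MLP stand-in, not the paper's architecture."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM + CTRL_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, POSE_DIM),
        )

    def forward(self, pose, ctrl):
        return self.net(torch.cat([pose, ctrl], dim=-1))

class Pose2Frame(nn.Module):
    """Second network: maps (current pose, next pose, background) to a frame.
    The background is flattened and concatenated purely for brevity."""
    def __init__(self, bg_dim=3 * 64 * 64, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * POSE_DIM + bg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, bg_dim),
        )

    def forward(self, pose, next_pose, background):
        x = torch.cat([pose, next_pose, background.flatten(start_dim=1)], dim=-1)
        return self.net(x).view_as(background)

def rollout(p2p, p2f, init_pose, controls, background):
    """Autoregressive generation: each user control advances the pose, and
    each (pose, next pose) pair is rendered onto the given background."""
    frames, pose = [], init_pose
    for ctrl in controls:
        next_pose = p2p(pose, ctrl)
        frames.append(p2f(pose, next_pose, background))
        pose = next_pose
    return torch.stack(frames, dim=1)  # (batch, time, 3, H, W)

# Example usage with dummy inputs:
p2p, p2f = Pose2Pose(), Pose2Frame()
video = rollout(
    p2p, p2f,
    init_pose=torch.zeros(1, POSE_DIM),
    controls=torch.randn(8, 1, CTRL_DIM),  # eight user control inputs
    background=torch.zeros(1, 3, 64, 64),
)  # -> shape (1, 8, 3, 64, 64)

Because the first network consumes only the current pose and a single-step control signal, the loop can run indefinitely from live user input, which is what makes the extracted character controllable in a game-like setting.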
