Temporally Coherent Completion of Dynamic Video

SIGGRAPH Asia


Abstract

We present an automatic video completion algorithm that synthesizes missing regions in videos in a temporally coherent fashion. Our algorithm can handle dynamic scenes captured with a moving camera. State-of-the-art approaches have difficulty handling such videos because viewpoint changes cause the image-space motion vectors in the missing and known regions to be inconsistent. We address this problem by jointly estimating optical flow and color in the missing regions. Using pixel-wise forward/backward flow fields enables us to synthesize temporally coherent colors. We formulate the problem as a non-parametric patch-based optimization. We demonstrate our technique on numerous challenging videos.
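To make the role of the flow fields concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of how pixel-wise forward/backward flow can pull temporally coherent colors into a missing region from the adjacent frames. All function names, the bilinear sampler, and the equal-weight blending of the two candidates are assumptions for illustration; the paper instead estimates flow and color jointly inside the hole as a non-parametric patch-based optimization.

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Bilinearly sample an (H, W, C) image at float coordinates (xs, ys)."""
    h, w = img.shape[:2]
    # Crude boundary handling for this sketch: clamp to the valid interior.
    xs = np.clip(xs, 0.0, w - 1.001)
    ys = np.clip(ys, 0.0, h - 1.001)
    x0 = np.floor(xs).astype(int)
    y0 = np.floor(ys).astype(int)
    dx = (xs - x0)[:, None]
    dy = (ys - y0)[:, None]
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bot = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

def propagate_hole_colors(frame, frame_prev, frame_next,
                          flow_bwd, flow_fwd, hole_mask):
    """One flow-guided color propagation step for pixels inside the hole.

    frame, frame_prev, frame_next : (H, W, 3) float images at t, t-1, t+1.
    flow_bwd, flow_fwd            : (H, W, 2) per-pixel (dx, dy) flow from
                                    frame t to frames t-1 and t+1.
    hole_mask                     : (H, W) bool, True inside the missing region.
    """
    filled = frame.copy()
    ys, xs = np.nonzero(hole_mask)
    # Follow the backward flow to pull a color candidate from frame t-1.
    c_bwd = bilinear_sample(frame_prev,
                            xs + flow_bwd[ys, xs, 0],
                            ys + flow_bwd[ys, xs, 1])
    # Follow the forward flow to pull a color candidate from frame t+1.
    c_fwd = bilinear_sample(frame_next,
                            xs + flow_fwd[ys, xs, 0],
                            ys + flow_fwd[ys, xs, 1])
    # Blend the two candidates; equal weighting is an assumption of this sketch.
    filled[ys, xs] = 0.5 * (c_bwd + c_fwd)
    return filled
```

In the paper, this kind of propagation is interleaved with estimating the flow fields inside the hole and with patch-based color synthesis, so that motion and color are optimized jointly; the sketch above isolates only the flow-guided color transfer that makes the completed colors temporally coherent.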

Related Publications

NeurIPS - December 1, 2020

Continuous Surface Embeddings

Natalia Neverova, David Novotny, Vasil Khalidov, Marc Szafraniec, Patrick Labatut, Andrea Vedaldi

NeurIPS - December 4, 2020

Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases

Senthil Purushwalkam, Abhinav Gupta

3DV - November 25, 2020

MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video

Donglai Xiang, Fabian Prada, Chenglei Wu, Jessica Hodgins

CVPR - November 9, 2020

One-Shot Domain Adaptation For Face Generation

Chao Yang, Ser Nam Lim
