Explicit Clothing Modeling for an Animatable Full-Body Avatar

arXiv


Abstract

Recent work has shown great progress in building photorealistic, animatable full-body codec avatars, but these avatars still struggle to generate high-fidelity animation of clothing. To address these difficulties, we propose a method for building an animatable clothed-body avatar, with an explicit representation of the upper-body clothing, from multi-view captured videos. We use a two-layer mesh representation to register the 3D scans separately against their respective templates. To improve photometric correspondence across frames, we then align textures through inverse rendering of the clothing geometry and texture predicted by a variational autoencoder. We then train a new two-layer codec avatar that models the upper clothing and the inner body layer separately. To learn the interaction between body dynamics and clothing states, we use a temporal convolution network to predict the clothing latent code from a sequence of input skeletal poses. We show photorealistic animation output for three different actors, and demonstrate the advantage of our clothed-body avatars over the single-layer avatars of previous work. We also show the benefit of an explicit clothing model, which allows the clothing texture to be edited in the animation output.
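The pose-to-clothing prediction described above can be sketched as a small dilated temporal convolution network. This is a minimal illustrative sketch, not the paper's architecture: the pose dimension, latent dimension, hidden width, and layer count are all hypothetical placeholders, and the class name is invented for the example.

```python
import torch
import torch.nn as nn

class PoseToClothingLatent(nn.Module):
    """Temporal convolution network mapping a window of skeletal poses
    to per-frame clothing latent codes. All dimensions here are
    illustrative placeholders, not values from the paper."""
    def __init__(self, pose_dim=63, latent_dim=128, hidden=256):
        super().__init__()
        # Dilated 1D convolutions over the time axis aggregate pose
        # context from a growing temporal window while preserving the
        # sequence length (padding = dilation for kernel size 3).
        self.net = nn.Sequential(
            nn.Conv1d(pose_dim, hidden, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, latent_dim, kernel_size=3, padding=4, dilation=4),
        )

    def forward(self, poses):
        # poses: (batch, frames, pose_dim); Conv1d expects channels first
        x = poses.transpose(1, 2)
        z = self.net(x)
        return z.transpose(1, 2)  # (batch, frames, latent_dim)

model = PoseToClothingLatent()
poses = torch.randn(2, 16, 63)   # 2 sequences of 16 frames of joint angles
latents = model(poses)
print(tuple(latents.shape))      # (2, 16, 128)
```

At animation time, the predicted latent code would be decoded by the clothing model into geometry and texture for each frame, so clothing state is driven entirely by the skeletal pose history.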

Related Publications


ISMAR - July 29, 2021

Instant Visual Odometry Initialization for Mobile AR

Alejo Concha, Michael Burri, Jesus Briales, Christian Forster, Luc Oth

ICSA - November 6, 2019

Auralization systems for simulation of augmented reality experiences in virtual environments

Peter Dodds, Sebastià V. Amengual Garí, W. Owen Brimijoin, Philip W. Robinson

Journal of the Audio Engineering Society - July 20, 2021

Six-Degrees-of-Freedom Parametric Spatial Audio Based on One Monaural Room Impulse Response

Johannes M. Arend, Sebastià V. Amengual Garí, Carl Schissler, Florian Klein, Philip W. Robinson

ACM Transactions on Applied Perception Journal (ACM TAP) - September 16, 2021

Evaluating Grasping Visualizations and Control Modes in a VR Game

Alex Adkins, Lorraine Lin, Aline Normoyle, Ryan Canales, Yuting Ye, Sophie Jörg
