December 4, 2018

DeepFocus: Learned Image Synthesis for Computational Displays

ACM SIGGRAPH Asia 2018

In this paper, we introduce DeepFocus, a generic, end-to-end convolutional neural network designed to efficiently solve the full range of computational tasks for accommodation-supporting HMDs. This network is demonstrated to accurately synthesize defocus blur, focal stacks, multilayer decompositions, and multiview imagery using only commonly available RGB-D images, enabling real-time, near-correct depictions of retinal blur with a broad set of accommodation-supporting HMDs.

By: Lei Xiao, Anton S. Kaplanyan, Alexander Fix, Matthew Chapman, Douglas Lanman
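
A minimal sketch of the kind of network the DeepFocus entry describes, written in PyTorch. The layer widths, depth, and the way the focus distance is fed to the network are assumptions for illustration only, not the published architecture:

```python
# Minimal sketch (not the published DeepFocus architecture): a small
# convolutional network that maps an RGB-D image plus a target focus
# distance to a defocus-blurred RGB image. Layer widths, depth, and the
# conditioning scheme are illustrative assumptions.
import torch
import torch.nn as nn

class DefocusNet(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        # Input: 3 RGB channels + 1 depth channel + 1 focus-distance plane.
        self.body = nn.Sequential(
            nn.Conv2d(5, width, 3, padding=1), nn.ELU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ELU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ELU(),
            nn.Conv2d(width, 3, 3, padding=1),  # predicted blurred RGB
        )

    def forward(self, rgb, depth, focus_dist):
        # Broadcast the scalar focus distance to a full-resolution plane so the
        # network can condition the synthesized blur on the accommodation state.
        b, _, h, w = rgb.shape
        focus_plane = focus_dist.view(b, 1, 1, 1).expand(b, 1, h, w)
        x = torch.cat([rgb, depth, focus_plane], dim=1)
        return self.body(x)

# Usage: synthesize retinal blur for a batch of RGB-D frames.
net = DefocusNet()
rgb = torch.rand(2, 3, 128, 128)
depth = torch.rand(2, 1, 128, 128)
focus = torch.tensor([0.5, 2.0])   # assumed focus distances (e.g., diopters)
blurred = net(rgb, depth, focus)   # (2, 3, 128, 128)
```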

December 4, 2018

Realistic AR Makeup over Diverse Skin Tones on Mobile

SIGGRAPH Asia

We propose a novel approach for applying realistic makeup over a diverse set of skin tones on mobile phones using augmented reality.

By: Bruno Evangelista, Houman Meshkin, Helen Kim, Anaelisa Aburto, Ben Max Rubinstein, Andrea Ho
Areas: AR/VR

November 27, 2018

Deep Incremental Learning for Efficient High-Fidelity Face Tracking

ACM SIGGRAPH Asia 2018

In this paper, we present an incremental learning framework for efficient and accurate facial performance tracking. Our approach alternates between a modeling step, which uses the tracked meshes and texture maps to train our deep learning-based statistical model, and a tracking step, which takes the geometry and texture predictions our model infers from measured images and optimizes the predicted geometry by minimizing image, geometry, and facial landmark errors.

By: Chenglei Wu, Takaaki Shiratori, Yaser Sheikh
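
A schematic of the alternating modeling/tracking loop described above. The three helpers are placeholder stubs standing in for the real deep statistical model and the image/geometry/landmark optimization; this is not the authors' implementation:

```python
# Schematic sketch of incremental modeling/tracking alternation (placeholders,
# not the paper's components).
import numpy as np

def train_statistical_model(meshes, textures):
    # Stub: the real step trains a deep statistical model on the tracked data.
    return {"mean_mesh": np.mean(meshes, axis=0),
            "mean_tex": np.mean(textures, axis=0)}

def predict_geometry_and_texture(model, frame):
    # Stub: the real step infers per-frame geometry and texture from the image.
    return model["mean_mesh"].copy(), model["mean_tex"].copy()

def refine_geometry(geom, tex, frame):
    # Stub: the real step minimizes image, geometry, and landmark errors.
    return geom

def incremental_face_tracking(frames, init_meshes, init_textures, n_rounds=3):
    meshes, textures = list(init_meshes), list(init_textures)
    model = None
    for _ in range(n_rounds):
        # Modeling step: refit the statistical model to current tracking results.
        model = train_statistical_model(meshes, textures)
        # Tracking step: predict, then refine, geometry and texture per frame.
        meshes, textures = [], []
        for frame in frames:
            geom, tex = predict_geometry_and_texture(model, frame)
            geom = refine_geometry(geom, tex, frame)
            meshes.append(geom)
            textures.append(tex)
    return model, meshes, textures

# Usage with dummy data (shapes are arbitrary for the sketch).
frames = [np.zeros((64, 64, 3)) for _ in range(4)]
init_meshes = [np.zeros((500, 3)) for _ in range(4)]
init_textures = [np.zeros((32, 32, 3)) for _ in range(4)]
model, meshes, textures = incremental_face_tracking(frames, init_meshes, init_textures)
```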

November 1, 2018

Fast Depth Densification for Occlusion-aware Augmented Reality

SIGGRAPH Asia

Current AR systems track only sparse geometric features and do not compute depth for all pixels. For this reason, most AR effects are pure overlays that can never be occluded by real objects. We present a novel algorithm that propagates sparse depth to every pixel in near real time.

By: Aleksander Holynski, Johannes Kopf
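
For illustration only, a naive sparse-to-dense interpolation that shows the shape of the densification problem and how a dense depth map enables occlusion; the paper's edge-aware, near-real-time propagation is not reproduced here:

```python
# Illustrative only: naive sparse-to-dense depth interpolation, not the
# paper's algorithm. A few tracked points become a dense depth map that can
# be used to occlude virtual content behind real surfaces.
import numpy as np
from scipy.interpolate import griddata

def densify_depth(sparse_xy, sparse_depth, height, width):
    """sparse_xy: (N, 2) pixel coordinates (x, y); sparse_depth: (N,) depths."""
    ys, xs = np.mgrid[0:height, 0:width]
    dense = griddata(sparse_xy, sparse_depth, (xs, ys), method="linear")
    # Fill pixels outside the convex hull of the sparse points.
    nearest = griddata(sparse_xy, sparse_depth, (xs, ys), method="nearest")
    dense[np.isnan(dense)] = nearest[np.isnan(dense)]
    return dense

# Usage: occlude a virtual object wherever the real scene is closer.
pts = np.array([[10, 20], [100, 40], [60, 90], [30, 70]], dtype=float)
depths = np.array([1.2, 2.5, 1.8, 3.0])
depth_map = densify_depth(pts, depths, height=120, width=160)
virtual_depth = 2.0
occlusion_mask = depth_map < virtual_depth  # real surface in front of the object
```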

October 25, 2018

Peri-personal space as a prior in coupling visual and proprioceptive signals

Scientific Reports 2018

It has been suggested that the integration of multiple body-related sources of information within the peri-personal space (PPS) scaffolds body ownership. However, a normative computational framework detailing the functional role of PPS is still missing. Here we cast PPS as a visuo-proprioceptive Bayesian inference problem whereby objects we see in our environment are more likely to engender sensations as they come near to the body. We propose that PPS reflects this increased a priori probability of visuo-proprioceptive coupling surrounding the body.

By: Jean-Paul Noel, Majed Samad, Andrew Doxon, Justin Clark, Sean Keller, Massimiliano Di Luca
Areas: AR/VR
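
One schematic way to formalize the coupling prior described above, in notation introduced here rather than taken from the paper: let $x_V$ and $x_P$ be the visual and proprioceptive position estimates, let $C = 1$ denote the event that they share a common cause, and let $d$ be the distance of the seen object from the body.

```latex
% Schematic Bayesian causal-inference reading of the PPS prior (notation mine,
% not quoted from the paper): coupling is a priori more likely the closer the
% seen object is to the body.
\[
p(C = 1 \mid d) = \pi(d), \qquad \pi \ \text{decreasing in } d,
\]
\[
\hat{x} \;=\; p(C{=}1 \mid x_V, x_P, d)\,\hat{x}_{\mathrm{fused}}
      \;+\; p(C{=}0 \mid x_V, x_P, d)\,\hat{x}_{\mathrm{segregated}}.
\]
```

Here $\hat{x}_{\mathrm{fused}}$ is the reliability-weighted combination of $x_V$ and $x_P$ and $\hat{x}_{\mathrm{segregated}}$ keeps the cues separate; a larger near-body prior $\pi(d)$ pulls the estimate toward fusion, which is the sense in which PPS would act as a prior on visuo-proprioceptive coupling.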

October 12, 2018

Conveying Language through Haptics: A Multi-sensory Approach

International Symposium on Wearable Computers 2018

We propose using a multi-sensory haptic device called MISSIVE, which can be worn on the upper arm and is capable of producing brief cues, sufficient in quantity to encode the full English phoneme set.

By: Nathan Dunkelberger, Jenny Sullivan, Joshua Bradley, Nickolas P Walling, Indu Manickam, Gautam Dasarathy, Ali Israr, Frances Lau, Keith Klumb, Brian Knott, Freddy Abnousi, Richard Baraniuk, Marcia K. O’Malley

October 1, 2018

The effects of natural scene statistics on text readability in additive displays

Human Factors and Ergonomics Society

The minimum contrast needed for optimal text readability on additive displays (e.g., AR devices) depends on the spatial structure of the background and text. Natural scenes and text follow similar spectral patterns; as a result, natural scenes can mask low-contrast text, making it difficult to read. In a set of experiments, we determine the minimum viable contrast for readability on an additive display.

By: Daryn R. Blanc-Goldhammer, Kevin J. MacKenzie
Areas: AR/VR
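
For reference, one standard contrast measure that maps naturally onto additive displays, which can only add luminance $L_{\mathrm{add}}$ on top of the background luminance $L_{\mathrm{bg}}$, is Weber contrast; whether the study uses exactly this definition is not stated in the summary above:

```latex
% Illustrative only: Weber contrast for additively displayed text, where the
% text luminance is the background plus the display's added luminance.
\[
C_{\mathrm{Weber}}
  = \frac{L_{\mathrm{text}} - L_{\mathrm{bg}}}{L_{\mathrm{bg}}}
  = \frac{(L_{\mathrm{bg}} + L_{\mathrm{add}}) - L_{\mathrm{bg}}}{L_{\mathrm{bg}}}
  = \frac{L_{\mathrm{add}}}{L_{\mathrm{bg}}}.
\]
```

Under this definition, the same added luminance yields lower contrast on brighter backgrounds, which is one reason the structure and luminance of the real-world background matter for readability.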

September 9, 2018

DDRNet: Depth Map Denoising and Refinement for Consumer Depth Cameras Using Cascaded CNNs

European Conference on Computer Vision (ECCV)

Although much progress has been made in reducing noise and recovering geometric detail, the problem remains far from solved due to its inherent ill-posedness and the real-time requirement. We propose a cascaded Depth Denoising and Refinement Network (DDRNet) to tackle this problem by leveraging the multi-frame fused geometry and the accompanying high-quality color image through a joint training strategy.

By: Shi Yan, Chenglei Wu, Lizhen Wang, Feng Xu, Liang An, Kaiwen Guo, Yebin Liu
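
A minimal sketch of a two-stage denoise-then-refine cascade in PyTorch, in which the second stage is guided by the aligned color image. Layer counts, widths, and the residual formulation are illustrative assumptions, not the published DDRNet:

```python
# Minimal sketch (not the published DDRNet): denoising network followed by a
# color-guided refinement network; widths and depths are assumptions.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class DenoiseNet(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, width), conv_block(width, width),
                                 nn.Conv2d(width, 1, 3, padding=1))
    def forward(self, depth):
        # Residual prediction: output a correction to the noisy depth.
        return depth + self.net(depth)

class RefineNet(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(conv_block(4, width), conv_block(width, width),
                                 nn.Conv2d(width, 1, 3, padding=1))
    def forward(self, depth, rgb):
        # Color-guided refinement: concatenate denoised depth with RGB.
        return depth + self.net(torch.cat([depth, rgb], dim=1))

class CascadedDDR(nn.Module):
    def __init__(self):
        super().__init__()
        self.denoise, self.refine = DenoiseNet(), RefineNet()
    def forward(self, noisy_depth, rgb):
        smooth = self.denoise(noisy_depth)
        detailed = self.refine(smooth, rgb)
        return smooth, detailed

# Usage on a dummy frame.
model = CascadedDDR()
depth = torch.rand(1, 1, 96, 96)
rgb = torch.rand(1, 3, 96, 96)
smooth, detailed = model(depth, rgb)
```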

September 8, 2018

DeepWrinkles: Accurate and Realistic Clothing Modeling

European Conference on Computer Vision (ECCV)

We present a novel method to generate accurate and realistic clothing deformation from real data capture. Previous methods for realistic cloth modeling mainly rely on intensive computation of physics-based simulation (with numerous heuristic parameters), while models reconstructed from visual observations typically suffer from a lack of geometric detail.

By: Zorah Lähner, Daniel Cremers, Tony Tung

September 7, 2018

Recycle-GAN: Unsupervised Video Retargeting

European Conference on Computer Vision (ECCV)

We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if the content of John Oliver’s speech were transferred to Stephen Colbert, the generated content/speech should be in Stephen Colbert’s style.

By: Aayush Bansal, Shugao Ma, Deva Ramanan, Yaser Sheikh
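
The summary above does not spell out the training objective. In rough notation introduced here (not quoted from the paper), with generators $G_Y: X \to Y$ and $G_X: Y \to X$ and a temporal predictor $P_Y$ in the target domain, the kind of temporal cycle consistency ("recycle") loss such a retargeting setup can enforce looks like:

```latex
% Schematic recycle loss (notation mine): map frames x_1..x_t into domain Y,
% predict the next frame there, map it back, and compare against the true x_{t+1}.
\[
\mathcal{L}_{\mathrm{recycle}}(G_X, G_Y, P_Y)
  = \sum_t \bigl\lVert x_{t+1}
    - G_X\bigl(P_Y\bigl(G_Y(x_1), \ldots, G_Y(x_t)\bigr)\bigr) \bigr\rVert^2 .
\]
```

This couples the two mapping directions through time rather than only per frame, which is what makes temporal structure usable as a training signal in the unpaired setting.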