March 26, 2019

The Impact of Avatar Tracking Errors on User Experience in VR

IEEE Conference on Virtual Reality

This paper presents a series of experiments, employing a sizable subject pool (n=96), that study the impact of motion tracking errors on user experience for activities including social interaction and virtual object manipulation.

By: Nicholas Toothman, Michael Neff

March 23, 2019

The Effect of Hand Size and Interaction Modality on the Virtual Hand Illusion

IEEE Conference on Virtual Reality

In this paper, we consider how concepts related to the virtual hand illusion, user experience, and task efficiency are influenced by differences in size between a user’s actual hand and their avatar’s hand.

By: Lorraine Lin, Aline Normoyle, Alexandra Adkins, Yu Sun, Andrew Robb, Yuting Ye, Massimiliano Di Luca, Sophie Jörg
Areas: AR/VR

February 16, 2019

Machine Learning at Facebook: Understanding Inference at the Edge

IEEE International Symposium on High-Performance Computer Architecture (HPCA)

This paper takes a data-driven approach to present the opportunities and design challenges faced by Facebook in enabling machine learning inference locally on smartphones and other edge platforms.

By: Carole-Jean Wu, David Brooks, Kevin Chen, Douglas Chen, Sy Choudhury, Marat Dukhan, Kim Hazelwood, Eldad Isaac, Yangqing Jia, Bill Jia, Tommer Leyvand, Hao Lu, Yang Lu, Lin Qiao, Brandon Reagen, Joe Spisak, Fei Sun, Andrew Tulloch, Peter Vajda, Xiaodong Wang, Yanghan Wang, Bram Wasti, Yiming Wu, Ran Xian, Sungjoo Yoo, Peizhao Zhang

December 17, 2018

Compact Dielectric Elastomer Linear Actuators

Advanced Functional Materials 2018

The design and fabrication of a rolled dielectric elastomer actuator are described, and the parametric dependence of the displacement and blocked force on the actuator geometry, elastomer layer thickness, voltage, and number of turns is analyzed.

By: Huichan Zhao, Aftab M. Hussain, Mihai Duduta, Daniel M. Vogt, Robert J. Wood, David R. Clarke
Areas: AR/VR
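
The displacement and blocked force studied above are driven by the Maxwell (electrostatic) pressure in the elastomer, which grows with the square of the applied voltage and shrinks with the square of the layer thickness. The Python sketch below is only a back-of-the-envelope illustration of that scaling; the permittivity, thickness, and voltage values are hypothetical and are not taken from the paper.

# Toy illustration of the Maxwell pressure that drives a dielectric
# elastomer actuator: p = eps0 * eps_r * (V / t)^2.
# All numbers are hypothetical, not values from the paper.
EPS0 = 8.854e-12     # vacuum permittivity, F/m
eps_r = 3.0          # relative permittivity of the elastomer (assumed)
thickness = 40e-6    # elastomer layer thickness, m (assumed)
voltage = 3000.0     # applied voltage, V (assumed)

pressure = EPS0 * eps_r * (voltage / thickness) ** 2
print(f"Maxwell pressure ~ {pressure / 1e3:.1f} kPa")
# Halving the layer thickness at constant voltage quadruples the pressure,
# which is one reason layer thickness appears in the parametric study.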

December 4, 2018

DeepFocus: Learned Image Synthesis for Computational Displays

ACM SIGGRAPH Asia 2018

In this paper, we introduce DeepFocus, a generic, end-to-end convolutional neural network designed to efficiently solve the full range of computational tasks for accommodation-supporting HMDs. This network is demonstrated to accurately synthesize defocus blur, focal stacks, multilayer decompositions, and multiview imagery using only commonly available RGB-D images, enabling real-time, near-correct depictions of retinal blur with a broad set of accommodation-supporting HMDs.

By: Lei Xiao, Anton S. Kaplanyan, Alexander Fix, Matthew Chapman, Douglas Lanman
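
As a rough illustration of the input/output mapping described in the DeepFocus abstract above (and not the paper's architecture, which is more elaborate), the Python sketch below shows a small fully convolutional network that maps a 4-channel RGB-D image to a 3-channel image, as one might use when learning to synthesize defocus blur from RGB-D input. The layer sizes, dummy data, and mean-squared-error loss are placeholders.

# Minimal sketch (not the DeepFocus architecture): a small fully convolutional
# network mapping an RGB-D image (4 channels) to an RGB image (3 channels),
# e.g. a defocus-blurred rendering. Requires PyTorch.
import torch
import torch.nn as nn

class TinyRGBDToRGB(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, kernel_size=3, padding=1),
        )

    def forward(self, rgbd):
        # rgbd: (batch, 4, H, W) tensor holding RGB plus a depth channel
        return self.net(rgbd)

model = TinyRGBDToRGB()
rgbd = torch.rand(1, 4, 64, 64)          # dummy RGB-D input
target_blur = torch.rand(1, 3, 64, 64)   # dummy reference defocus image
loss = nn.functional.mse_loss(model(rgbd), target_blur)
loss.backward()                          # one illustrative training step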

December 4, 2018

Realistic AR Makeup over Diverse Skin Tones on Mobile

ACM SIGGRAPH Asia 2018

We propose a novel approach for applying realistic makeup over a diverse set of skin tones on mobile phones using augmented reality.

By: Bruno Evangelista, Houman Meshkin, Helen Kim, Anaelisa Aburto, Ben Max Rubinstein, Andrea Ho
Areas: AR/VR

November 27, 2018

Deep Incremental Learning for Efficient High-Fidelity Face Tracking

ACM SIGGRAPH Asia 2018

In this paper, we present an incremental learning framework for efficient and accurate facial performance tracking. Our approach alternates between a modeling step, which takes tracked meshes and texture maps to train our deep learning-based statistical model, and a tracking step, which takes the geometry and texture predictions our model infers from measured images and optimizes the predicted geometry by minimizing image, geometry, and facial landmark errors.

By: Chenglei Wu, Takaaki Shiratori, Yaser Sheikh
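
The alternation described above can be pictured as a simple loop: a tracking step that refines the model's prediction against each new image, and a modeling step that periodically refits the statistical model to everything tracked so far. The Python sketch below is only a schematic of that loop; the toy model, the scalar "meshes", and the refinement rule are stand-ins, not the authors' implementation.

# Schematic, runnable sketch of an incremental model/track loop.
# The model, "meshes", and "textures" here are trivial stand-ins
# (plain numbers), not the paper's deep statistical model.
import random

class ToyFaceModel:
    def __init__(self):
        self.mean_mesh = 0.0                    # stand-in for a learned model

    def predict(self, image):
        return self.mean_mesh, image            # "geometry" and "texture" guesses

    def fit(self, tracked):
        meshes = [mesh for mesh, _ in tracked]
        self.mean_mesh = sum(meshes) / len(meshes)

def refine(mesh, texture, image):
    # Stand-in for optimizing the prediction by minimizing image, geometry,
    # and landmark errors: nudge the predicted mesh toward the observation.
    return 0.5 * (mesh + image), texture

def incremental_tracking(frames, model, refit_every=10):
    tracked = []
    for image in frames:
        mesh, texture = model.predict(image)    # tracking step
        mesh, texture = refine(mesh, texture, image)
        tracked.append((mesh, texture))
        if len(tracked) % refit_every == 0:     # modeling step
            model.fit(tracked)
    return tracked

incremental_tracking([random.random() for _ in range(30)], ToyFaceModel())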

November 1, 2018

Fast Depth Densification for Occlusion-aware Augmented Reality

ACM SIGGRAPH Asia 2018

Current AR systems only track sparse geometric features but do not compute depth for all pixels. For this reason, most AR effects are pure overlays that can never be occluded by real objects. We present a novel algorithm that propagates sparse depth to every pixel in near real time.

By: Aleksander Holynski, Johannes Kopf
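
As a naive baseline for what "propagating sparse depth to every pixel" means (this is not the paper's algorithm, which the abstract only summarizes), the Python sketch below fills every pixel with an inverse-distance-weighted average of its nearest sparse depth samples using SciPy's k-d tree.

# Naive sparse-to-dense depth sketch (not the paper's algorithm): fill each
# pixel with an inverse-distance-weighted average of its k nearest sparse
# depth samples. Requires NumPy and SciPy.
import numpy as np
from scipy.spatial import cKDTree

def densify_depth(sparse_yx, sparse_depth, height, width, k=4):
    # sparse_yx: (N, 2) pixel coordinates of the sparse depth samples
    # sparse_depth: (N,) depth values at those pixels
    tree = cKDTree(sparse_yx)
    ys, xs = np.mgrid[0:height, 0:width]
    queries = np.stack([ys.ravel(), xs.ravel()], axis=1)
    dists, idx = tree.query(queries, k=k)
    weights = 1.0 / (dists + 1e-6)              # closer samples count more
    weights /= weights.sum(axis=1, keepdims=True)
    dense = (weights * sparse_depth[idx]).sum(axis=1)
    return dense.reshape(height, width)

# Toy example: 50 random sparse samples densified to a 64x64 depth map.
rng = np.random.default_rng(0)
points = rng.integers(0, 64, size=(50, 2))
depths = rng.uniform(1.0, 5.0, size=50)
dense_map = densify_depth(points, depths, 64, 64)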

October 25, 2018

Peri-personal space as a prior in coupling visual and proprioceptive signals

Scientific Reports 2018

It has been suggested that the integration of multiple body-related sources of information within the peri-personal space (PPS) scaffolds body ownership. However, a normative computational framework detailing the functional role of PPS is still missing. Here we cast PPS as a visuo-proprioceptive Bayesian inference problem whereby objects we see in our environment are more likely to engender sensations as they come near the body. We propose that PPS is the reflection of such an increased a priori probability of visuo-proprioceptive coupling that surrounds the body.

By: Jean-Paul Noel, Majed Samad, Andrew Doxon, Justin Clark, Sean Keller, Massimiliano Di Luca
Areas: AR/VR
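
To make the "increased a priori probability of visuo-proprioceptive coupling" concrete, the toy Python example below works through a causal-inference style computation: with Gaussian sensory noise, the posterior probability that a seen object and the felt hand position share a common cause rises when the coupling prior is higher near the body. The prior, noise levels, and workspace size are illustrative values only, not parameters fitted in the paper.

# Toy illustration of a proximity-dependent prior on visuo-proprioceptive
# coupling. All numbers are illustrative, not the paper's fitted parameters.
import numpy as np

def coupling_posterior(visual_pos, proprio_pos, prior_couple,
                       sigma_v=1.0, sigma_p=2.0):
    # Likelihood of the visual-proprioceptive discrepancy under a common
    # cause (discrepancy explained by sensory noise alone) versus under
    # independent causes (discrepancy spread over the whole workspace).
    d = visual_pos - proprio_pos
    var = sigma_v**2 + sigma_p**2
    like_common = np.exp(-d**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    like_indep = 1.0 / 60.0        # uniform over an assumed 60 cm workspace
    num = prior_couple * like_common
    return num / (num + (1.0 - prior_couple) * like_indep)

# A coupling prior that decays with distance from the body (hypothetical).
for dist_from_body in [5.0, 20.0, 40.0]:        # cm
    prior = np.exp(-dist_from_body / 20.0)      # higher prior near the body
    p = coupling_posterior(visual_pos=dist_from_body, proprio_pos=0.0,
                           prior_couple=prior)
    print(f"{dist_from_body:4.0f} cm from the body: P(coupled | cues) = {p:.2f}")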

October 12, 2018

Conveying Language through Haptics: A Multi-sensory Approach

International Symposium on Wearable Computers 2018

We propose using a multi-sensory haptic device called MISSIVE, which can be worn on the upper arm and is capable of producing brief cues, sufficient in quantity to encode the full English phoneme set.

By: Nathan Dunkelberger, Jenny Sullivan, Joshua Bradley, Nickolas P Walling, Indu Manickam, Gautam Dasarathy, Ali Israr, Frances Lau, Keith Klumb, Brian Knott, Freddy Abnousi, Richard Baraniuk, Marcia K. O’Malley
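
As a back-of-the-envelope check on "sufficient in quantity to encode the full English phoneme set", the number of distinguishable brief cues a multi-sensory device can render is the product of the levels available on each channel, and it only needs to exceed the roughly 39 phonemes commonly counted for English. The channel names and level counts in the Python sketch below are hypothetical, not MISSIVE's actual design.

# Back-of-the-envelope count of distinct multi-sensory cues versus the number
# of English phonemes. Channel names and level counts are hypothetical,
# not MISSIVE's actual design.
from itertools import product

channels = {
    "vibration_site": 4,      # hypothetical number of vibrotactile locations
    "vibration_pattern": 3,   # hypothetical number of temporal patterns
    "squeeze_level": 4,       # hypothetical number of squeeze intensities
}

num_cues = len(list(product(*[range(n) for n in channels.values()])))
num_english_phonemes = 39     # commonly cited count; varies by analysis
print(f"{num_cues} distinct cues vs. {num_english_phonemes} phonemes:",
      "sufficient" if num_cues >= num_english_phonemes else "insufficient")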