
January 1, 2020

Designing Safe Spaces for Virtual Reality

Ethics in Design and Communication

Virtual Reality (VR) designers accept the ethical responsibility of removing a user’s entire world and replacing it with a fabricated reality. These unique immersive design challenges are intensified when virtual experiences become public and socially driven. As female VR designers in 2018, we see an opportunity to fold the language of consent into the design practice of virtual reality, as a means to design safe, accessible virtual spaces.

Publication will be made available in 2020.

By: Michelle Cortese, Andrea Zeller

November 10, 2019

An Integrated 6DoF Video Camera and System Design

SIGGRAPH Asia

Designing a fully integrated 360° video camera supporting 6DoF head motion parallax requires overcoming many technical hurdles, including camera placement, optical design, sensor resolution, system calibration, real-time video capture, depth reconstruction, and real-time novel view synthesis. While there is a large body of work describing various system components, such as multi-view depth estimation, our paper is the first to describe a complete, reproducible system that considers the challenges arising when designing, building, and deploying a full end-to-end 6DoF video camera and playback environment.
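
Playback in such a system ultimately comes down to re-rendering captured imagery from a novel head position using per-pixel depth. As a rough sketch of that single step (not the paper's pipeline; the pinhole intrinsics, poses, and nearest-neighbor splatting are simplifying assumptions), depth-based reprojection looks like this:

```python
import numpy as np

def reproject(image, depth, K, T_src_to_dst):
    """Toy depth-based novel-view synthesis: unproject each pixel of `image`
    (H, W, 3) with its `depth` (H, W) and pinhole intrinsics K (3, 3), move
    the 3D points into the new camera with a 4x4 rigid transform, and
    forward-splat the colors. Real systems add z-buffering and hole filling."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T   # (3, H*W)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)            # unproject
    pts = T_src_to_dst @ np.vstack([pts, np.ones((1, pts.shape[1]))])
    proj = K @ pts[:3]
    uv = np.round(proj[:2] / proj[2]).astype(int)                  # project
    out = np.zeros_like(image)
    ok = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (pts[2] > 0)
    out[uv[1, ok], uv[0, ok]] = image.reshape(-1, 3)[ok]
    return out
```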

By: Albert Parra Pozo, Michael Toksvig, Terry Filiba Schrager, Joyce Hsu, Uday Mathur, Alexander Sorkine-Hornung, Richard Szeliski, Brian Cabral

September 5, 2019

C3DPO: Canonical 3D Pose Networks for Non-Rigid Structure From Motion

International Conference on Computer Vision (ICCV)

We propose C3DPO, a method for extracting 3D models of deformable objects from 2D keypoint annotations in unconstrained images. We do so by learning a deep network that reconstructs a 3D object from a single view at a time, accounting for partial occlusions, and explicitly factoring the effects of viewpoint changes and object deformations.
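
The factorization can be pictured as a small network that maps 2D keypoints to deformation coefficients and a viewpoint rotation, with 3D shape expressed in a learned linear basis. A minimal PyTorch sketch under those assumptions (layer sizes, the orthographic projection, and the axis-angle parameterization are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

def axis_angle_to_matrix(v):
    """Rodrigues' formula: axis-angle vectors (B, 3) -> rotation matrices."""
    theta = v.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    k = v / theta
    K = torch.zeros(v.shape[0], 3, 3, device=v.device)
    K[:, 0, 1], K[:, 0, 2] = -k[:, 2], k[:, 1]
    K[:, 1, 0], K[:, 1, 2] = k[:, 2], -k[:, 0]
    K[:, 2, 0], K[:, 2, 1] = -k[:, 1], k[:, 0]
    I = torch.eye(3, device=v.device).expand_as(K)
    th = theta[..., None]
    return I + torch.sin(th) * K + (1 - torch.cos(th)) * (K @ K)

class FactoredReconstruction(nn.Module):
    """Toy factored non-rigid reconstruction: an encoder maps 2D keypoints
    (with visibility flags) to deformation coefficients alpha and a viewpoint
    rotation; the 3D shape is a linear combination of learned basis shapes."""

    def __init__(self, n_kpts=15, n_basis=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_kpts * 3, 256), nn.ReLU(),
            nn.Linear(256, n_basis + 3),          # alpha (D) + axis-angle (3)
        )
        self.basis = nn.Parameter(0.01 * torch.randn(n_basis, n_kpts, 3))

    def forward(self, kpts_2d, vis):
        # kpts_2d: (B, K, 2) normalized keypoints; vis: (B, K) in {0, 1}.
        code = self.encoder(torch.cat([kpts_2d, vis[..., None]], -1).flatten(1))
        alpha, rot = code[:, :-3], code[:, -3:]
        shape = torch.einsum('bd,dkc->bkc', alpha, self.basis)   # canonical 3D
        R = axis_angle_to_matrix(rot)
        reproj = (shape @ R.transpose(1, 2))[..., :2]  # orthographic reprojection
        return shape, reproj
```

Training would minimize the reprojection error at the visible keypoints, alongside the paper's additional regularizers.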

By: David Novotny, Nikhila Ravi, Benjamin Graham, Natalia Neverova, Andrea Vedaldi

August 12, 2019

Efficient Segmentation: Learning Downsampling Near Semantic Boundaries

International Conference on Computer Vision (ICCV)

Many automated processes, such as auto-piloting, rely on good semantic segmentation as a critical component. To speed up performance, it is common to downsample the input frame. However, this comes at the cost of missed small objects and reduced accuracy at semantic boundaries. To address this problem, we propose a new content-adaptive downsampling technique that learns to favor sampling locations near the semantic boundaries of target classes.
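
Mechanically, such a downsampler can be realized as a sampling grid whose locations are perturbed by predicted offsets. A hedged sketch using PyTorch's grid_sample (the offset-predicting network is omitted and all shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def adaptive_downsample(image, offsets):
    """Toy content-adaptive downsampling: instead of sampling the input on a
    uniform low-resolution grid, perturb each sampling location by a learned
    offset so samples concentrate near semantic boundaries. `offsets` is a
    (1, 2, h, w) tensor in normalized [-1, 1] coordinates, e.g. predicted by
    a small CNN (not shown here)."""
    _, _, h, w = offsets.shape
    ys = torch.linspace(-1, 1, h)
    xs = torch.linspace(-1, 1, w)
    gy, gx = torch.meshgrid(ys, xs, indexing='ij')
    base = torch.stack([gx, gy], dim=-1)[None]        # uniform grid (1, h, w, 2)
    grid = base + offsets.permute(0, 2, 3, 1)         # shift sample locations
    return F.grid_sample(image, grid, align_corners=True)

# Usage: downsample 512x512 to 64x64; zero offsets recover uniform sampling.
img = torch.rand(1, 3, 512, 512)
small = adaptive_downsample(img, torch.zeros(1, 2, 64, 64))
```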

By: Dmitrii Marin, Zijian He, Peter Vajda, Priyam Chatterjee, Sam Tsai, Fei Yang, Yuri Boykov

July 31, 2019

Neural Volumes: Learning Dynamic Renderable Volumes from Images

SIGGRAPH

To overcome memory limitations of voxel-based representations, we learn a dynamic irregular grid structure implemented with a warp field during ray-marching. This structure greatly improves the apparent resolution and reduces grid-like artifacts and jagged motion. Finally, we demonstrate how to incorporate surface-based representations into our volumetric-learning framework for applications where the highest resolution is required, using facial performance capture as a case in point.
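
The warp-field idea can be pictured as ray marching in which each world-space sample is first remapped into the template volume before the RGBA grid is read. A toy sketch under assumed tensor layouts, activations, and ray bounds (all ours, not the paper's):

```python
import torch
import torch.nn.functional as F

def march_warped_volume(rgba, warp, origins, dirs, n_steps=64, near=0.5, far=1.5):
    """Toy ray-marcher over a warped volume: each sample point is remapped by
    a warp field into template coordinates before the RGBA grid is read,
    giving an irregular effective grid. `rgba` is (1, 4, D, H, W); `warp`
    (1, 3, D, H, W) holds template-space coordinates in [-1, 1]; in practice
    both would be decoder outputs."""
    B = origins.shape[0]
    t = torch.linspace(near, far, n_steps)                      # sample depths
    pts = origins[:, None] + t[None, :, None] * dirs[:, None]   # (B, S, 3)
    grid = pts.view(1, B, n_steps, 1, 3)                        # grid_sample layout
    # Warp world-space samples into template coordinates, then fetch RGBA.
    warped = F.grid_sample(warp, grid, align_corners=True)      # (1, 3, B, S, 1)
    warped = warped.permute(0, 2, 3, 4, 1)                      # coords last
    samples = F.grid_sample(rgba, warped, align_corners=True)[0, :, :, :, 0]
    rgb, alpha = samples[:3].sigmoid(), samples[3].sigmoid()
    # Front-to-back alpha compositing along each ray.
    trans = torch.cumprod(torch.cat([torch.ones(B, 1), 1 - alpha[:, :-1]], 1), 1)
    weights = trans * alpha                                     # (B, S)
    return (weights[None] * rgb).sum(-1).T                      # (B, 3) colors
```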

By: Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, Yaser Sheikh

July 12, 2019

The contributions of skin stretch and kinesthetic information to static weight perception

World Haptics

In this study, we examined the contributions of kinesthetic and skin stretch cues, in isolation and together, to the static perception of weight. In two psychophysical experiments, we asked participants either to detect on which hand a weight was presented or to compare between two weight cues. Two closed-loop controlled haptic devices were used to present weights with a precision of 0.05g to an end-effector held in a pinch grasp.
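
For intuition about how such data are analyzed, two-alternative forced-choice responses are typically summarized by a psychometric function relating weight magnitude to proportion correct. A toy simulated observer (parameter values are illustrative, not the paper's fits):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_2afc(weight_g, threshold_g=5.0, slope=1.5, n_trials=200):
    """Simulate an observer in the two-interval detection task: the chance of
    correctly reporting which hand held the weight rises from 0.5 (guessing)
    toward 1.0 as weight magnitude grows, following a logistic psychometric
    function. Returns a boolean array of simulated correct/incorrect trials."""
    p_correct = 0.5 + 0.5 / (1 + np.exp(-slope * (weight_g - threshold_g)))
    return rng.random(n_trials) < p_correct

# Proportion correct for a 6 g weight under the toy observer model.
print(simulate_2afc(6.0).mean())
```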

By: Femke E. van Beek, Raymond J. King, Casey Brown, Massimiliano Di Luca
Areas: AR/VR

July 12, 2019

VR Facial Animation via Multiview Image Translation

SIGGRAPH

In this work, we present a bidirectional system that can animate avatar heads of both users in their full likeness using consumer-friendly headset-mounted cameras (HMCs). There are two main challenges in doing this: unaccommodating camera views and the image-to-avatar domain gap. We address both challenges by leveraging constraints imposed by multiview geometry to establish precise image-to-avatar correspondences, which are then used to learn an end-to-end model for real-time tracking.
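
The multiview constraint behind this correspondence step can be illustrated with classic linear triangulation: a facial point observed in several headset cameras determines a 3D location given the camera matrices. A minimal DLT sketch (the paper's full pipeline is considerably more involved):

```python
import numpy as np

def triangulate(P_list, uv_list):
    """Linear (DLT) triangulation: recover the 3D point seen at pixel (u, v)
    in several calibrated cameras with 3x4 projection matrices P. Each view
    contributes two rows to a homogeneous system A X = 0, solved via SVD."""
    A = []
    for P, (u, v) in zip(P_list, uv_list):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                    # null-space vector, homogeneous coordinates
    return X[:3] / X[3]
```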

By: Shih-En Wei, Jason Saragih, Tomas Simon, Adam W. Harley, Stephen Lombardi, Michal Perdoch, Alexander Hypes, Dawei Wang, Hernan Badino, Yaser Sheikh

July 12, 2019

Spatiotemporal Haptic Effects from a Single Actuator via Spectral Control of Cutaneous Wave Propagation

IEEE World Haptics Conference

A key challenge in haptic engineering is to design methods for stimulating the skin – a continuous medium with infinitely many degrees of freedom – via practical devices with few degrees of freedom. Here, we show how to use a single actuator to generate tactile stimuli with dynamically controlled spatial extent.
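
The underlying physics is that higher-frequency vibrations attenuate more quickly as they propagate through skin, so shaping the drive signal's spectrum shapes how far the stimulus spreads. A schematic synthesis in NumPy (frequencies and envelope are illustrative, not the authors' method):

```python
import numpy as np

fs = 8000                            # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

def drive_signal(f_carrier, f_env=4.0):
    """Amplitude-modulated sinusoid at `f_carrier` Hz. Low carriers propagate
    farther through skin (broad percept); high carriers damp quickly
    (localized percept), so the carrier choice acts as a spatial-extent knob.
    A schematic of the spectral-control idea only."""
    envelope = 0.5 * (1 - np.cos(2 * np.pi * f_env * t))   # smooth on/off
    return envelope * np.sin(2 * np.pi * f_carrier * t)

wide = drive_signal(40)      # low frequency: travels far, broad extent
narrow = drive_signal(400)   # high frequency: damps quickly, narrow extent
```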

By: Bharat Dandu, Yitian Shao, Andrew Stanley, Yon Visell
Areas: AR/VR

July 12, 2019

Tasbi: Multisensory Squeeze and Vibrotactile Wrist Haptics for Augmented and Virtual Reality

World Haptics

In this work, we present Tasbi, a multisensory haptic wristband capable of delivering squeeze and vibrotactile feedback. The device features a novel mechanism for generating evenly distributed and purely normal squeeze forces around the wrist. Our approach ensures that Tasbi’s six radially spaced vibrotactors maintain position and exhibit consistent skin coupling.

By: Evan Pezent, Ali Israr, Majed Samad, Shea Robinson, Priyanshu Agarwal, Hrvoje Benko, Nick Colonnese
Areas: AR/VR

July 12, 2019

Z-Qualities and Renderable Mass-Damping-Stiffness Spaces: Describing the Set of Renderable Dynamics of Kinesthetic Haptic Displays

IEEE World Haptics Conference

In this paper we introduce language and definitions to describe the set of dynamics that a general kinesthetic haptic display can render to a human operator.
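
For context, a kinesthetic display rendering a virtual mass-damper-spring presents an impedance relating motion to force; in the Laplace domain a common textbook model (our notation, not necessarily the paper's) is:

```latex
% Rendered force for commanded mass M, damping B, and stiffness K,
% given end-effector displacement X(s):
F(s) = \left( M s^{2} + B s + K \right) X(s),
\qquad
Z(s) = \frac{F(s)}{s\,X(s)} = M s + B + \frac{K}{s}
```

The renderable mass-damping-stiffness space is then the set of (M, B, K) triples the device can present stably and accurately.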

By: Nick Colonnese, Sonny Chan