
December 15, 2019

Multi-Sensory Stimuli Improve Distinguishability of Cutaneous Haptic Cues

IEEE Transactions on Haptics

We present experimental results that demonstrate that rendering haptic cues with multi-sensory components—specifically, lateral skin stretch, radial squeeze, and vibrotactile stimuli—improved perceptual distinguishability in comparison to similar cues with all-vibrotactile components. These results support the incorporation of diverse stimuli, both vibrotactile and non-vibrotactile, for applications requiring large haptic cue sets.

By: Jennifer L. Sullivan, Nathan Dunkelberger, Joshua Bradley, Joseph Young, Ali Israr, Frances Lau, Keith Klumb, Freddy Abnousi, Marcia K. O’Malley
Areas: AR/VR

December 1, 2019

Efficient Representation and Sparse Sampling of Head-Related Transfer Functions Using Phase-Correction Based on Ear Alignment

IEEE Transactions on Audio, Speech, and Language Processing (TASLP)

In this paper, a new method for pre-processing HRTFs in order to reduce their effective order is presented. The method uses phase-correction based on ear alignment, exploiting the dual-centering nature of HRTF measurements. In contrast to time-alignment, the phase-correction is performed parametrically, making it more robust to measurement noise. The spherical harmonic (SH) order reduction and ensuing interpolation errors due to sparse sampling were analyzed for both time-alignment and the proposed phase-correction.

By: Zamir Ben-Hur, David Lou Alon, Ravish Mehra, Boaz Rafaely
Areas: AR/VR
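
A minimal sketch of the ear-alignment idea described above, assuming a plane-wave model and a nominal head-center-to-ear offset; the function name, array layout, sign convention, and numeric values are illustrative assumptions, not the paper's code:

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s

    def ear_align(hrtf, freqs, src_dirs, ear_dir, ear_offset=0.0875):
        """Remove the direction-dependent linear phase due to the head-center-to-ear offset.

        hrtf       : complex array (n_dirs, n_freqs), HRTF measured w.r.t. the head center
        freqs      : array (n_freqs,), frequencies in Hz
        src_dirs   : array (n_dirs, 3), unit vectors pointing toward the sources
        ear_dir    : array (3,), unit vector from the head center toward the ear
        ear_offset : assumed head-center-to-ear distance in meters
        """
        k = 2.0 * np.pi * np.asarray(freqs) / SPEED_OF_SOUND     # wavenumber per frequency
        cos_theta = np.asarray(src_dirs) @ np.asarray(ear_dir)   # angle factor per direction
        # Sign of the exponent depends on the Fourier/time convention of the measurement.
        correction = np.exp(-1j * ear_offset * np.outer(cos_theta, k))
        return hrtf * correction

Because the removed phase is a known analytic function of direction and frequency, it can be reapplied exactly after SH fitting or interpolation, which is what allows the correction to stay parametric rather than relying on per-measurement delay estimates.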

November 18, 2019

DeepFovea: Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos

ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia

Foveated rendering and compression can reduce computation by lowering image quality in peripheral vision. However, this can cause noticeable artifacts in the periphery or, if done conservatively, yields only modest savings. In this work, we explore a novel foveated reconstruction method that employs recent advances in generative adversarial neural networks.

By: Anton S. Kaplanyan, Anton Sochenov, Thomas Leimkühler, Mikhail Okunev, Todd Goodall, Gizem Rufo
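
To make the idea of peripheral quality reduction concrete, here is a toy sketch of a gaze-contingent sparse sampling mask in which pixel density falls off with eccentricity. The falloff shape, radii, and densities are illustrative assumptions and are not the sampling scheme used in DeepFovea:

    import numpy as np

    def foveated_sample_mask(height, width, gaze_xy, fovea_radius=0.1, min_density=0.02, seed=0):
        """Boolean mask: True where a pixel is rendered/transmitted at full rate."""
        rng = np.random.default_rng(seed)
        ys, xs = np.mgrid[0:height, 0:width]
        # Eccentricity of each pixel from the gaze point, in normalized image coordinates.
        ecc = np.hypot(xs / width - gaze_xy[0], ys / height - gaze_xy[1])
        # Full density inside the fovea, inverse falloff down to min_density in the periphery.
        density = np.clip(fovea_radius / np.maximum(ecc, 1e-6), min_density, 1.0)
        return rng.random((height, width)) < density

    mask = foveated_sample_mask(720, 1280, gaze_xy=(0.5, 0.5))
    print(f"{mask.mean():.1%} of pixels sampled")  # rough proxy for the compute/bandwidth saving

A reconstruction network would then be trained to in-paint the unsampled pixels so that the periphery remains perceptually plausible despite the sparse input.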

November 10, 2019

An Integrated 6DoF Video Camera and System Design

SIGGRAPH Asia

Designing a fully integrated 360° video camera supporting 6DoF head motion parallax requires overcoming many technical hurdles, including camera placement, optical design, sensor resolution, system calibration, real-time video capture, depth reconstruction, and real-time novel view synthesis. While there is a large body of work describing various system components, such as multi-view depth estimation, our paper is the first to describe a complete, reproducible system that considers the challenges arising when designing, building, and deploying a full end-to-end 6DoF video camera and playback environment.

By: Albert Parra Pozo, Michael Toksvig, Terry Filiba Schrager, Joyce Hsu, Uday Mathur, Alexander Sorkine-Hornung, Richard Szeliski, Brian Cabral

November 9, 2019

Harassment in Social Virtual Reality: Challenges for Platform Governance

Conference on Computer-Supported Cooperative Work and Social Computing (CSCW)

In immersive virtual reality (VR) environments, experiences of harassment can be exacerbated by features such as synchronous voice chat, heightened feelings of presence and embodiment, and avatar movements that can feel like violations of personal space (such as simulated touching or grabbing). Simultaneously, efforts to govern these developing spaces are made more complex by the distributed landscape of virtual reality applications and the dynamic nature of local community norms. To better understand this nascent social and psychological environment, we interviewed VR users (n=25) about their experiences with harassment, abuse, and discomfort in social VR.

By: Lindsay Blackwell, Nicole Ellison, Natasha Elliott-Deflo, Raz Schwartz

October 29, 2019

Talking With Hands 16.2M: A Large-Scale Dataset of Synchronized Body-Finger Motion and Audio for Conversational Motion Analysis and Synthesis

International Conference on Computer Vision (ICCV)

We present a 16.2 million frame (50 hour) multimodal dataset of two-person face-to-face spontaneous conversations. Our dataset features synchronized body and finger motion as well as audio data. To the best of our knowledge, it represents the largest motion capture and audio dataset of natural conversations to date.

By: Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S. Srinivasa, Yaser Sheikh

October 28, 2019

DenseRaC: Joint 3D Pose and Shape Estimation by Dense Render-and-Compare

International Conference on Computer Vision (ICCV)

We present DenseRaC, a novel end-to-end framework for jointly estimating 3D human pose and body shape from a monocular RGB image. Our two-step framework takes the body pixel-to-surface correspondence map (i.e., the IUV map) as a proxy representation and then estimates parameterized human pose and shape.

By: Yuanlu Xu, Song-Chun Zhu, Tony Tung
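
For intuition, the second step can be thought of as a regressor from an IUV correspondence map to parametric pose and shape. The sketch below is a toy stand-in under that reading; the layer sizes and parameter dimensions are assumptions loosely following common parametric body models and are not the DenseRaC architecture:

    import torch
    import torch.nn as nn

    class IUVToBodyParams(nn.Module):
        """Toy regressor from an IUV correspondence map to parametric pose/shape."""
        def __init__(self, n_pose=72, n_shape=10):
            super().__init__()
            self.n_pose = n_pose
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(64, n_pose + n_shape)

        def forward(self, iuv):
            # iuv: (batch, 3, H, W) -- part index plus per-part UV coordinates per pixel
            params = self.head(self.encoder(iuv))
            return params[:, :self.n_pose], params[:, self.n_pose:]

    pose, shape = IUVToBodyParams()(torch.rand(2, 3, 224, 224))

The render-and-compare stage then renders the predicted body and penalizes disagreement with the input correspondences, closing the loop end to end.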

October 28, 2019

Ray tracing 3D spectral scenes through human optics models

Journal of Vision

Scientists and engineers have created computations and made measurements that characterize the first steps of seeing. ISETBio software integrates such computations and data into an open-source software package. The initial ISETBio implementations modeled image formation (physiological optics) for planar or distant scenes. The ISET3d software described here extends that implementation, simulating image formation for three-dimensional scenes.

By: Trisha Lian, Kevin J. MacKenzie, David H. Brainard, Nicolas P. Cottaris, Brian A. Wandell
Areas: AR/VR

October 27, 2019

Habitat: A Platform for Embodied AI Research

International Conference on Computer Vision (ICCV)

We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation.

By: Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, Dhruv Batra
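
For context, a minimal episode loop with the open-source habitat-api of that era looked roughly like the following; the config path and the scene datasets it expects depend on the installed version and are assumptions here:

    import habitat

    # Load a PointNav task configuration (path depends on the habitat-api checkout).
    config = habitat.get_config("configs/tasks/pointnav.yaml")
    env = habitat.Env(config=config)

    observations = env.reset()  # dict of sensor observations (e.g., RGB, depth)
    while not env.episode_over:
        # Drive the agent with random actions; a real agent would consume `observations`.
        observations = env.step(env.action_space.sample())
    env.close()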

September 19, 2019

Virtual Grasping Feedback and Virtual Hand Ownership

Symposium on Applied Perception (SAP)

In this study, we analyze the performance, user preference, and sense of ownership for eight virtual grasping visualizations. Six are classified as either a tracked hand visualization or an outer hand visualization. The tracked hand visualizations are those that allow the virtual hand to enter the object being grasped, whereas the outer hand visualizations do not, thereby simulating a realistic interaction.

By: Ryan Canales, Aline Normoyle, Yu Sun, Yuting Ye, Massimiliano Di Luca, Sophie Jörg
Areas: AR/VR