Auralization systems for simulation of augmented reality experiences in virtual environments

International Conference on Spatial Audio (ICSA)


Augmented reality has the potential to connect people anywhere, anytime, and provide them with interactive virtual objects that enhance their lives. To deliver contextually appropriate audio for these experiences, a much greater understanding is needed of how users will interact with augmented content and with each other. This contribution presents a system for evaluating human behavior and augmented reality device performance in calibrated, synthesized environments. The system consists of a spherical loudspeaker array capable of spatial audio reproduction in a noise-isolated and acoustically dampened room. The space is equipped with motion capture systems that track listener position, orientation, and eye gaze direction in temporal synchrony with audio playback and capture, allowing interactive control over the acoustic environment. In addition to spatial audio content from the loudspeaker array, supplementary virtual objects can be presented to listeners using motion-tracked, non-occluding headphones. The system facilitates a wide array of studies relating to augmented reality research, including communication ecology, spatial hearing, room acoustics, and device performance. System applications and configuration, calibration, processing, and validation routines are presented.
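To make the interactive rendering idea concrete, the sketch below shows one simple way such a system could map a virtual sound source to loudspeaker gains while compensating for tracked listener head rotation. This is an illustrative cosine-weighted panning example, not the paper's actual rendering method; the function names, the yaw-only rotation, and the four-speaker layout in the usage example are all assumptions made for brevity.

```python
import math

def unit(v):
    """Normalize a 3-D vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def world_to_head(v, head_yaw):
    """Rotate a world-frame direction into head coordinates,
    given the tracked head yaw (radians, positive = turn left)."""
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

def panning_gains(source_dir, speaker_dirs, head_yaw=0.0):
    """Cosine-weighted, power-normalized gains for each speaker.

    source_dir   -- direction to the virtual source in world coordinates
    speaker_dirs -- list of loudspeaker directions (e.g. on a sphere)
    head_yaw     -- tracked listener yaw used to render head-relative content
    """
    d = unit(world_to_head(source_dir, head_yaw))
    # Project the source direction onto each speaker direction,
    # clipping negative (rear-facing) contributions to zero.
    raw = [max(0.0, sum(a * b for a, b in zip(d, unit(s))))
           for s in speaker_dirs]
    # Normalize so the summed power of all gains is 1.
    norm = math.sqrt(sum(g * g for g in raw)) or 1.0
    return [g / norm for g in raw]

# Example: four speakers on the horizontal plane (front, left, back, right).
speakers = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
# A source straight ahead maps entirely to the front speaker...
front = panning_gains((1, 0, 0), speakers)
# ...but if the listener turns 90 degrees left, the same world-frame
# source is now to their right (head-relative -y).
turned = panning_gains((1, 0, 0), speakers, head_yaw=math.pi / 2)
```

A real implementation would use a proper spherical panner (e.g. VBAP or Ambisonics decoding) over the full loudspeaker sphere and the full tracked orientation, but the world-to-head coordinate transform driven by motion-capture data is the core interactive step.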
