On the Predictability of HRTFs from Ear Shapes Using Deep Networks

International Conference on Acoustics, Speech, and Signal Processing (ICASSP)


Head-Related Transfer Function (HRTF) individualization is critical for immersive and realistic spatial audio rendering in augmented/virtual reality. Neither acoustic measurements nor simulations based on 3D scans of the head and ears scale to practical applications. More efficient machine learning approaches have recently been explored to predict HRTFs from ear images or anthropometric features. However, it is not yet clear whether such models can serve as an alternative to direct measurements or high-fidelity simulations. Here, we aim to address this question. Using 3D ear shapes as inputs, we explore the bounds of HRTF predictability with deep neural networks. To that end, we propose and evaluate two models and identify the lowest achievable spectral distance error when predicting the true HRTF magnitude spectra.
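The abstract evaluates predictions by their spectral distance to the true HRTF magnitude spectra. The paper's exact metric is not given here; a common choice for this kind of evaluation is the log-spectral distance (LSD), sketched below as an assumption, computed over the frequency bins of a single HRTF magnitude response:

```python
import numpy as np

def log_spectral_distance(h_true, h_pred, eps=1e-12):
    """Log-spectral distance (in dB) between two HRTF magnitude spectra.

    h_true, h_pred: arrays of linear magnitude values over frequency bins.
    eps guards against log of zero. This is an illustrative metric, not
    necessarily the exact error measure used in the paper.
    """
    h_true = np.asarray(h_true, dtype=float)
    h_pred = np.asarray(h_pred, dtype=float)
    diff_db = 20.0 * np.log10((h_true + eps) / (h_pred + eps))
    return float(np.sqrt(np.mean(diff_db ** 2)))

# Identical spectra give zero distance.
flat = np.ones(128)
print(log_spectral_distance(flat, flat))  # → 0.0

# A uniform factor-of-2 magnitude error corresponds to ~6.02 dB.
print(round(log_spectral_distance(2 * flat, flat), 2))  # → 6.02
```

In practice such a distance would be averaged over source directions and both ears to obtain a single error figure per subject.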

Related Publications


SIGGRAPH - August 9, 2021

ManipNet: Neural Manipulation Synthesis with a Hand-Object Spatial Representation

He Zhang, Yuting Ye, Takaaki Shiratori, Taku Komura

SIGGRAPH - August 9, 2021

Control Strategies for Physically Simulated Characters Performing Two-player Competitive Sports

Jungdam Won, Deepak Gopinath, Jessica Hodgins

CVPR - June 20, 2021

Ego-Exo: Transferring Visual Representations from Third-person to First-person Videos

Yanghao Li, Tushar Nagarajan, Bo Xiong, Kristen Grauman

ICML - July 18, 2021

Align, then memorise: the dynamics of learning with feedback alignment

Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt
