A Simulation-based Framework for Characterizing Predictive Distributions for Deep Learning

ICML Workshop on Uncertainty and Robustness in Deep Learning


Characterizing the confidence of machine learning predictions unlocks models that know when they do not know. In this study, we propose a framework for assessing the quality of predictive distributions obtained using deep learning models. The framework represents both aleatoric and epistemic uncertainty, and relies on simulated data so that each source of uncertainty is controlled and known by construction. Finally, it enables quantitative evaluation of the performance of uncertainty estimation techniques. We demonstrate the proposed framework with a case study highlighting the insights one can gain from using it.
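The core idea, as described above, is that simulation makes the ground-truth uncertainty available: the noise level (aleatoric uncertainty) is chosen by the experimenter, and regions without training data induce epistemic uncertainty. The sketch below is a hypothetical minimal illustration of that setup, not the paper's actual simulator or models; it uses a bootstrap ensemble of polynomial fits as a stand-in for a deep ensemble, with all function names and parameters chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy simulator (not the paper's): a 1-D regression task whose
# noise level is fixed by construction, so the true aleatoric variance is
# known and estimated uncertainties can be scored against it.
def simulate(n, x_lo, x_hi, noise_std=0.3):
    x = rng.uniform(x_lo, x_hi, n)
    y = np.sin(2 * x) + rng.normal(0.0, noise_std, n)  # known aleatoric noise
    return x, y

# Training data covers [-2, 2]; test inputs extend beyond it, so points
# outside the training range should carry extra epistemic uncertainty.
x_tr, y_tr = simulate(200, -2.0, 2.0)
x_te = np.linspace(-4.0, 4.0, 9)

# A bootstrap ensemble of degree-5 polynomial fits stands in for a deep
# ensemble: spread across members approximates epistemic uncertainty.
preds = []
for _ in range(20):
    idx = rng.integers(0, len(x_tr), len(x_tr))
    coef = np.polyfit(x_tr[idx], y_tr[idx], deg=5)
    preds.append(np.polyval(coef, x_te))
preds = np.array(preds)                      # shape: (members, test points)

mean = preds.mean(axis=0)
epistemic_var = preds.var(axis=0)            # disagreement between members
aleatoric_var = 0.3 ** 2                     # known from the simulator
total_var = epistemic_var + aleatoric_var    # predictive variance

# Far from the training range, epistemic variance should dominate.
print(epistemic_var[0], epistemic_var[len(x_te) // 2])
```

Because the simulator fixes the aleatoric variance, a technique's variance decomposition can be checked quantitatively rather than only inspected visually, which is the kind of evaluation the framework is built around.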

