
Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization

International Conference on Machine Learning (ICML)


Abstract

When tuning the architecture and hyperparameters of large machine learning models for on-device deployment, it is desirable to understand the optimal trade-offs between on-device latency and model accuracy. In this work, we leverage recent methodological advances in Bayesian optimization over high-dimensional search spaces and multi-objective Bayesian optimization to efficiently explore these trade-offs for a production-scale on-device natural language understanding model at Facebook.
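The "optimal trade-offs" the abstract refers to form a Pareto frontier: the set of configurations for which no other configuration is both faster and more accurate. As a minimal illustrative sketch only (the paper's actual method is multi-objective Bayesian optimization over a high-dimensional search space, not exhaustive enumeration), the frontier over a small set of evaluated `(latency, accuracy)` pairs could be computed like this; the `pareto_front` helper and the sample values are hypothetical:

```python
def pareto_front(configs):
    """Return the (latency, accuracy) pairs not dominated by any other.

    A config dominates another if its latency is no higher AND its
    accuracy is no lower, with a strict improvement in at least one.
    """
    front = []
    for i, (lat_i, acc_i) in enumerate(configs):
        dominated = any(
            lat_j <= lat_i and acc_j >= acc_i          # at least as good in both
            and (lat_j < lat_i or acc_j > acc_i)       # strictly better in one
            for j, (lat_j, acc_j) in enumerate(configs)
            if j != i
        )
        if not dominated:
            front.append((lat_i, acc_i))
    return front

# Hypothetical measurements: (on-device latency in ms, accuracy).
configs = [(10, 0.80), (12, 0.85), (15, 0.84), (20, 0.90)]
print(pareto_front(configs))  # (15, 0.84) is dominated by (12, 0.85)
```

In practice, exhaustively evaluating configurations is far too expensive for production-scale models, which is why the paper turns to sample-efficient multi-objective Bayesian optimization to approximate this frontier from few evaluations.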

Related Publications


Workshop on Online Abuse and Harms (WOAH) at ACL - November 30, 2021

Findings of the WOAH 5 Shared Task on Fine Grained Hateful Memes Detection

Lambert Mathias, Shaoliang Nie, Bertie Vidgen, Aida Davani, Zeerak Waseem, Douwe Kiela, Vinodkumar Prabhakaran

Journal of Big Data - November 6, 2021

A graphical method of cumulative differences between two subpopulations

Mark Tygert

NeurIPS - December 6, 2021

Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement

Samuel Daulton, Maximilian Balandat, Eytan Bakshy

UAI - July 27, 2021

Measuring Data Leakage in Machine-Learning Models with Fisher Information

Awni Hannun, Chuan Guo, Laurens van der Maaten
