May 26, 2020

Announcing the winners of the Towards On-Device AI research awards

By: Meta Research

In December 2019, Facebook launched the Towards On-Device AI request for proposals (RFP). The purpose of this RFP was to support the academic community in addressing fundamental challenges in this research area and to accelerate the transition toward a truly “smart” world in which AI capabilities permeate all devices and sensors. Today, we’re announcing the winners of these research awards.



“We’ve seen strong progress in moving AI workloads from the cloud to on-device. Running models locally has already helped drive new capabilities like speech assistants, night modes on cameras, and an entirely new class of intelligent devices like smartwatches and smart thermostats,” says Vikas Chandra, Director of AI Research. “It’s important to push this further to preserve privacy, reduce latency, and use compute power efficiently, and to help create even more experiences that are useful in everyday life.”

Models must be capable of constantly learning, adapting, and providing proactive assistance. However, today’s smart devices run much of their computation in the cloud (or on a remote host), which costs transmission power, adds response latency, and raises potential privacy concerns. This limits their ability to provide a compelling user experience and to realize the true potential of an “AI-everywhere” world.

For this RFP, we solicited proposals on a wide range of topics related to efficient on-device AI systems, including, but not limited to, the following:

  • Extending on-device capabilities for vision, audio, speech, and natural language processing
  • Distributing AI capabilities across the whole system stack from data capture at the edge to the cloud instead of performing all the compute in the cloud
  • Machine learning techniques to optimize system tasks such as compression, scheduling, and caching
  • On-device privacy-preserving learning
  • Efficient machine learning models for edge devices
  • Dynamic neural networks, such as mixture-of-expert networks
  • Platform-aware model optimization
  • Efficient hardware accelerator design
  • Tools for architecture modeling, design space exploration, and algorithm mapping
  • Efficient model execution, such as scheduling and tiling, on edge devices
  • Emerging technologies, such as near-sensor, near-memory, and near-storage computing
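Several of the topics above, such as efficient models for edge devices and platform-aware model optimization, commonly rely on quantization to shrink models for on-device deployment. As a minimal, illustrative sketch (not drawn from any awarded proposal, and with all names hypothetical), here is symmetric post-training int8 weight quantization in plain Python:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    scale = scale or 1.0  # avoid division by zero for an all-zero tensor
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from the integer codes."""
    return [c * scale for c in codes]

# Quantize a tiny weight vector, then reconstruct it.
weights = [0.5, -1.0, 0.25]
codes, scale = quantize_int8(weights)
recovered = dequantize(codes, scale)
```

Storing `codes` as int8 uses a quarter of the memory of float32 weights, at the cost of a small reconstruction error bounded by half the scale; production toolchains add refinements such as per-channel scales and quantization-aware training.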

We received 161 proposals from more than 111 universities. Thank you to all the researchers who took the time to submit a proposal, and congratulations to the award recipients.

Research award winners

Principal investigators are listed first unless otherwise noted.

Distributed architectures for deep learning with application to mapping and localization
Avideh Zakhor (University of California, Berkeley)

Hardware/software co-exploration of multi-modal neural architectures
Yiyu Shi (University of Notre Dame)

Learning dynamic multimodal inference for on-device VQA
Emma Strubell, Yonatan Bisk (Carnegie Mellon University)

Multiscale synthesis of tensor programs
Rastislav Bodik (University of Washington)

On-device efficient machine learning for visual and textual data
Hannaneh Hajishirzi (University of Washington)

Private federated learning: Differential privacy in heterogeneous networks
Virginia Smith, Ameet Talwalkar (Carnegie Mellon University)

Smart speech interface based on non-autoregressive end-to-end model
Shinji Watanabe (Johns Hopkins University)

Visual recognition on the edge through platform-aware model optimization
Vicente Ordonez (University of Virginia)