July 22, 2020

Announcing the winners of the 2020 AI System Hardware/Software Co-Design request for proposals

By: Meta Research

In March 2020, Facebook launched the AI System Hardware/Software Co-Design request for proposals (RFP) at MLSys. This research award opportunity is part of our continued goal of strengthening ties with academics working across the wide range of AI hardware/algorithm co-design research. Today, we’re announcing the recipients of these research awards.
We launched this RFP after the success of the 2019 RFP and the AI Systems Faculty Summit. This year, we were particularly interested in proposals related to any of the following areas:

  • Recommendation models
    • Compression, quantization, pruning techniques
    • Graph-based systems with implications on hardware (graph learning)
  • Hardware/software co-design for deep learning
    • Energy-efficient hardware architectures
    • Hardware efficiency–aware neural architecture search
    • Mixed-precision linear algebra and tensor-based frameworks
  • Distributed training
    • Software frameworks for efficient use of programmable hardware
    • Scalable communication-aware and data movement-aware algorithms
    • High-performance and fault-tolerant communication middleware
    • High-performance fabric topology and network transport for distributed training
  • Performance, programmability, and efficiency at data center scale
    • Machine learning–driven data access optimization (e.g., prefetching and caching)
    • Enabling large model deployment through intelligent memory and storage
    • Training unsupervised, self-supervised, and semi-supervised models on large-scale video data sets

“We received 132 proposals from 74 universities, which was an increase from last year’s 88 proposals. It was a difficult task to select a few research awards from a large pool of high-quality proposals,” says Maxim Naumov, a Research Scientist working on AI system co-design at Facebook. “We believe that the winners will help advance the state of the art in ML/DL system design. Thank you to all the researchers who took the time to submit a proposal, and congratulations to the award recipients.”

Research award recipients

Principal investigators are listed first unless otherwise noted.

Algorithm-systems co-optimization for near-data graph learning
Zhiru Zhang (Cornell University)

Analytical models for efficient data orchestration in DL workloads
Tze Meng Low (Carnegie Mellon University)

Efficient DNN training at scale: From algorithms to hardware
Gennady Pekhimenko (University of Toronto)

HW/SW co-design for real-time learning with memory augmented networks
Priyanka Raina, Burak Bartan, Haitong Li, and Mert Pilanci (Stanford University)

HW/SW co-design of next-generation training platforms for DLRMs
Tushar Krishna (Georgia Institute of Technology)

ML-driven hardware-software co-design for data access optimization
Sophia Shao and Seah Kim (University of California, Berkeley)

Rank-adaptive and low-precision tensorized training for DLRM
Zheng Zhang (University of California, Santa Barbara)

Scaling and accelerating distributed training with FlexFlow
Alexander Aiken (Stanford University)

Unsupervised training for large-scale video representation learning
Avideh Zakhor (University of California, Berkeley)