NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned



We review the EfficientQA competition from NeurIPS 2020. The competition focused on open-domain question answering (QA), where systems take natural language questions as input and return natural language answers. The aim of the competition was to build systems that can predict correct answers while also satisfying strict on-disk memory budgets. These memory budgets were designed to encourage contestants to explore the trade-off between storing large, redundant retrieval corpora and the parameters of large learned models. In this report, we describe the motivation and organization of the competition, review the best submissions, and analyze system predictions to inform a discussion of evaluation for open-domain QA.
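The two quantities the competition measures can be sketched concretely. Below is a minimal, hypothetical Python illustration (not the competition's official scorer): an exact-match check using a simplified form of the standard open-domain QA answer normalization, and an on-disk footprint measurement in which model parameters, retrieval index, corpus, and code all count toward the budget. The function names are illustrative assumptions.

```python
import os
import re
import string


def normalize(text: str) -> str:
    # Simplified answer normalization: lowercase, drop English articles,
    # strip punctuation, and collapse whitespace.
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())


def exact_match(prediction: str, references: list) -> bool:
    # A prediction is correct if its normalized form matches any
    # normalized reference answer.
    return normalize(prediction) in {normalize(r) for r in references}


def system_size_bytes(root: str) -> int:
    # Total on-disk size of a system directory tree; under a strict
    # memory budget, everything the system ships with counts.
    return sum(
        os.path.getsize(os.path.join(dirpath, name))
        for dirpath, _, filenames in os.walk(root)
        for name in filenames
    )
```

For example, `exact_match("The Eiffel Tower", ["eiffel tower"])` returns `True`, while a system whose `system_size_bytes` exceeds its track's budget would be disqualified regardless of accuracy.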
