Alexei Baevski

Research Engineer

I work at Facebook AI Research on NLP problems and toolkits such as Fairseq. Previously, I worked on Facebook Search, specifically on spell correction and query suggestion. Before that, I was at a financial risk management start-up, where I worked as an engineer and a manager. I hold a Bachelor's degree in computer science from the University of Toronto.


My research interests include NLP on its own and combined with other fields such as vision (e.g. visual question answering), conversational systems (chatbots, conversational search, assistants), recommender systems, and performance optimizations for existing algorithms.

Latest Publications

NeurIPS - December 6, 2020

wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

ICASSP - May 4, 2020

Effectiveness of self-supervised pre-training for ASR

Alexei Baevski, Abdelrahman Mohamed

ICLR - April 27, 2020

vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations

Alexei Baevski, Steffen Schneider, Michael Auli

EMNLP - November 7, 2019

Cloze-driven Pretraining of Self-attention Networks

Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli

Interspeech - September 15, 2019

wav2vec: Unsupervised Pre-training for Speech Recognition

Steffen Schneider, Alexei Baevski, Ronan Collobert, Michael Auli

ICLR - June 3, 2019

Pay Less Attention with Lightweight and Dynamic Convolutions

Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, Michael Auli

NAACL - June 3, 2019

FAIRSEQ: A Fast, Extensible Toolkit for Sequence Modeling

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli

ICLR - May 6, 2019

Adaptive Input Representations for Neural Language Modeling

Alexei Baevski, Michael Auli