I have been a research scientist at Facebook AI Research (FAIR) since 2018. My research interests lie in deep learning, speech, and natural language processing. Before joining FAIR, I worked at Amazon Alexa and Microsoft Research. I received my PhD from the Computer Science department at the University of Toronto, where I worked with Gerald Penn and Geoffrey Hinton. My paper “Convolutional Neural Networks for Speech Recognition” received the IEEE Signal Processing Society Best Paper Award for 2016.
Interests
Speech recognition, deep learning, natural language processing
Latest Publications
ACL - July 8, 2020
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer
ICASSP - May 4, 2020
Transformer-based Acoustic Modeling for Hybrid Speech Recognition
Yongqiang Wang, Abdelrahman Mohamed, Duc Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, Christian Fuegen, Geoffrey Zweig, Michael L. Seltzer
ICASSP - May 4, 2020
Libri-Light: A Benchmark for ASR with Limited or No Supervision
Jacob Kahn, Morgane Rivière, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, Tatiana Likhomanenko, Gabriel Synnaeve, Armand Joulin, Abdelrahman Mohamed, Emmanuel Dupoux
ICASSP - May 4, 2020
Training ASR Models by Generation of Contextual Information
Kritika Singh, Dmytro Okhonko, Jun Liu, Yongqiang Wang, Frank Zhang, Ross Girshick, Sergey Edunov, Fuchun Peng, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed
ICASSP - May 4, 2020
Effectiveness of Self-Supervised Pre-Training for ASR
Alexei Baevski, Abdelrahman Mohamed