Abdelrahman Mohamed

Research Scientist

I have been a research scientist at Facebook AI Research (FAIR) since 2018. My research interests lie in deep learning, speech processing, and natural language processing. Before joining FAIR, I worked at Amazon Alexa and Microsoft Research. I received my PhD from the Computer Science department at the University of Toronto, where I worked with Gerald Penn and Geoffrey Hinton. My paper “Convolutional Neural Networks for Speech Recognition” received the 2016 IEEE Signal Processing Society Best Paper Award.

Interests

Speech recognition, deep learning, natural language processing

Latest Publications

IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) - December 13, 2021

Kaizen: Continuously Improving Teacher Using Exponential Moving Average For Semi-supervised Speech Recognition

Vimal Manohar, Tatiana Likhomanenko, Qiantong Xu, Wei-Ning Hsu, Ronan Collobert, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed

Interspeech - September 3, 2021

Speech Resynthesis from Discrete Disentangled Self-Supervised Representations

Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux

Interspeech - August 30, 2021

SUPERB: Speech processing Universal PERformance Benchmark

Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Daniel Lin, Tzu-Hsien Huang, Wei-Cheng Tseng, Godic Lee, Darong Liu, Zili Huang, Annie Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, Hung-yi Lee

NeurIPS - December 6, 2020

wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

ACL - July 8, 2020

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer

ICASSP - May 4, 2020

Effectiveness of self-supervised pre-training for ASR

Alexei Baevski, Abdelrahman Mohamed

ICASSP - May 4, 2020

Transformer-based Acoustic Modeling for Hybrid Speech Recognition

Yongqiang Wang, Abdelrahman Mohamed, Duc Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, Christian Fuegen, Geoffrey Zweig, Michael L. Seltzer

ICASSP - May 4, 2020

Libri-light: A benchmark for ASR with limited or no supervision

Jacob Kahn, Morgane Rivière, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, Tatiana Likhomanenko, Gabriel Synnaeve, Armand Joulin, Abdelrahman Mohamed, Emmanuel Dupoux

ICASSP - May 4, 2020

Training ASR models by Generation of Contextual Information

Kritika Singh, Dmytro Okhonko, Jun Liu, Yongqiang Wang, Frank Zhang, Ross Girshick, Sergey Edunov, Fuchun Peng, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed