wav2vec: Unsupervised Pre-training for Speech Recognition

Interspeech - 2019


Abstract

We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data, and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce the WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data are available. Our approach achieves 2.43% WER on the nov92 test set, outperforming Deep Speech 2, the best reported character-based system in the literature, while using two orders of magnitude less labeled training data.
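
The pre-training idea described in the abstract can be illustrated with a small sketch: a convolutional encoder maps raw audio to latent representations, a context network aggregates them, and the model is trained with a noise contrastive binary classification loss to distinguish true future latents from negatives sampled from the same utterance. The PyTorch code below is a rough illustration under assumed layer sizes, kernel widths, and numbers of negatives; it is not the released wav2vec model or the paper's exact architecture.

```python
# Minimal sketch of wav2vec-style contrastive pre-training (illustrative
# hyperparameters; not the paper's exact configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Wav2VecSketch(nn.Module):
    def __init__(self, dim=512, prediction_steps=3, num_negatives=10):
        super().__init__()
        # Encoder: strided 1-D convolutions over raw audio -> latents z.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=4, stride=2), nn.ReLU(),
        )
        # Context network mixing nearby latents -> c (the paper uses causal
        # convolutions; symmetric padding is used here for brevity).
        self.context = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=3, padding=2), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=3, padding=2), nn.ReLU(),
        )
        # One affine projection per prediction step k.
        self.steps = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=1) for _ in range(prediction_steps)
        )
        self.num_negatives = num_negatives

    def forward(self, wav):
        # wav: (batch, samples) of raw audio.
        z = self.encoder(wav.unsqueeze(1))            # (B, D, T)
        c = self.context(z)[..., : z.size(-1)]        # trim back to (B, D, T)
        loss = wav.new_zeros(())
        for k, h_k in enumerate(self.steps, start=1):
            pred = h_k(c)[..., :-k]                   # predictions for z_{t+k}
            target = z[..., k:]                       # true future latents
            # Positive term: classify the true future sample as "real".
            pos = F.logsigmoid((pred * target).sum(dim=1))
            # Negatives: latents drawn at random time steps of the same utterance.
            B, D, T = target.shape
            idx = torch.randint(0, T, (B, self.num_negatives, T), device=z.device)
            negs = torch.gather(
                target.unsqueeze(1).expand(B, self.num_negatives, D, T),
                3, idx.unsqueeze(2).expand(B, self.num_negatives, D, T),
            )
            # Negative term: classify distractors as "fake".
            neg = F.logsigmoid(-(pred.unsqueeze(1) * negs).sum(dim=2))
            loss = loss - (pos.mean() + neg.mean())
        return loss


# Usage: one optimization step on a batch of unlabeled raw audio.
model = Wav2VecSketch()
audio = torch.randn(4, 16000)                         # 4 clips of 1 s at 16 kHz
loss = model(audio)
loss.backward()
```

After pre-training on unlabeled audio, the context representations would stand in for log-mel filterbank features as input to a supervised acoustic model, which is how the abstract's WER improvements on WSJ are obtained.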

Related Publications

Interspeech - August 31, 2021

slimIPL: Language-Model-Free Iterative Pseudo-Labeling

Tatiana Likhomanenko, Qiantong Xu, Jacob Kahn, Gabriel Synnaeve, Ronan Collobert

Interspeech - August 30, 2021

A Two-stage Approach to Speech Bandwidth Extension

Ju Lin, Yun Wang, Kaustubh Kalgaonkar, Gil Keren, Didi Zhang, Christian Fuegen

SIGDIAL - July 29, 2021

Getting to Production with Few-shot Natural Language Generation Models

Peyman Heidari, Arash Einolghozati, Shashank Jain, Soumya Batra, Lee Callender, Ankit Arun, Shawn Mei, Sonal Gupta, Pinar Donmez, Vikas Bhardwaj, Anuj Kumar, Michael White

ACL - August 2, 2021

Text-Free Image-to-Speech Synthesis Using Learned Segmental Units

Wei-Ning Hsu, David Harwath, Tyler Miller, Christopher Song, James Glass
