
FAIRSEQ: A Fast, Extensible Toolkit for Sequence Modeling

North American Chapter of the Association for Computational Linguistics (NAACL)


Abstract

FAIRSEQ is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. It also supports fast mixed-precision training and inference on modern GPUs. A demo video can be found here: https://www.youtube.com/watch?v=OtgDdWtHvto.

Related Publications


Interspeech - October 24, 2020

Efficient Wait-k Models for Simultaneous Machine Translation

Maha Elbayad, Laurent Besacier, Jakob Verbeek

ICASSP - May 11, 2019

Unsupervised Polyglot Text-To-Speech

Eliya Nachmani, Lior Wolf

Clinical NLP Workshop at EMNLP - November 12, 2020

Pretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art

Patrick Lewis, Myle Ott, Jingfei Du, Veselin Stoyanov

Interspeech - October 30, 2020

Interactive Text-to-Speech System via Joint Style Analysis

Yang Gao, Weiyi Zheng, Zhaojun Yang, Thilo Koehler, Christian Fuegen, Qing He
