Publication

Advances in Pre-Training Distributed Word Representations

Language Resources and Evaluation Conference (LREC), 2018


Abstract

Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are, however, rarely used together. The main result of our work is a new set of publicly available pre-trained models that outperform the current state of the art by a large margin on a number of tasks.
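
For readers who want to try the released vectors, the sketch below (not taken from the paper) shows one common way to load pre-trained word vectors stored in the plain-text word2vec format using the gensim library and to query them. The file name is illustrative and should point at a model you have actually downloaded.

# Minimal sketch (illustrative, not from the paper): load publicly released
# pre-trained word vectors with gensim and query them.
from gensim.models import KeyedVectors

# Vectors in the plain-text word2vec format: a header line, then one word
# per line followed by its embedding values. The file name is a placeholder.
vectors = KeyedVectors.load_word2vec_format("crawl-300d-2M.vec")

# Nearest neighbours of a word by cosine similarity.
print(vectors.most_similar("language", topn=5))

# Cosine similarity between two words.
print(vectors.similarity("car", "automobile"))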

Related Publications

AISTATS - April 13, 2021

Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors

Nikhil Mehta, Kevin J Liang, Vinay K Verma, Lawrence Carin

NeurIPS - December 6, 2020

Improved Sample Complexity for Incremental Autonomous Exploration in MDPs

Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric

NeurIPS - December 7, 2020

Labelling unlabelled videos from scratch with multi-modal self-supervision

Yuki M. Asano, Mandela Patrick, Christian Rupprecht, Andrea Vedaldi

NeurIPS - December 7, 2020

Adversarial Example Games

Avishek Joey Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton
