
#TagSpace: Semantic Embeddings from Hashtags

Empirical Methods in Natural Language Processing


Abstract

We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.
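As a rough illustration of the kind of model the abstract describes (not the authors' implementation), the sketch below embeds the words of a post, applies a convolution with max pooling to obtain a fixed-size text embedding, and scores candidate hashtags by dot product against a learned hashtag embedding table. Training uses a simple sampled margin ranking loss as a stand-in for the paper's ranking objective. All class names, dimensions, and hyperparameters are illustrative assumptions, written here in PyTorch.

```python
# Minimal sketch of a TagSpace-style hashtag predictor (assumed PyTorch formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TagSpaceSketch(nn.Module):
    def __init__(self, vocab_size=50000, num_tags=100000, dim=64, window=3, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.conv = nn.Conv1d(dim, hidden, kernel_size=window, padding=window // 2)
        self.proj = nn.Linear(hidden, dim)
        self.tag_emb = nn.Embedding(num_tags, dim)

    def embed_text(self, tokens):
        # tokens: (batch, seq_len) integer word ids
        x = self.word_emb(tokens).transpose(1, 2)   # (batch, dim, seq_len)
        h = torch.tanh(self.conv(x))                # (batch, hidden, seq_len)
        h = h.max(dim=2).values                     # max pooling over positions
        return torch.tanh(self.proj(h))             # (batch, dim) text embedding

    def score(self, tokens, tag_ids):
        # Dot-product score between post embeddings and candidate hashtags.
        text = self.embed_text(tokens)              # (batch, dim)
        tags = self.tag_emb(tag_ids)                # (batch, dim) or (batch, k, dim)
        if tags.dim() == 3:
            return torch.bmm(tags, text.unsqueeze(2)).squeeze(2)  # (batch, k)
        return (text * tags).sum(dim=1)             # (batch,)

def margin_ranking_step(model, tokens, pos_tags, num_tags=100000, margin=0.1):
    # Sample one negative hashtag per post and apply a pairwise margin loss:
    # the gold hashtag should outscore the sampled negative by at least `margin`.
    neg_tags = torch.randint(0, num_tags, pos_tags.shape)
    pos_score = model.score(tokens, pos_tags)
    neg_score = model.score(tokens, neg_tags)
    return F.relu(margin - pos_score + neg_score).mean()

if __name__ == "__main__":
    model = TagSpaceSketch()
    tokens = torch.randint(1, 50000, (8, 20))       # 8 toy posts of 20 word ids
    pos_tags = torch.randint(0, 100000, (8,))       # one gold hashtag per post
    loss = margin_ranking_step(model, tokens, pos_tags)
    loss.backward()
    print(float(loss))
```

At inference time, the learned `embed_text` output can be used directly as a general-purpose representation of a post (the "ignoring the hashtag labels" setting the abstract refers to), for example by ranking documents for recommendation by cosine similarity between their embeddings.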

