
#TagSpace: Semantic Embeddings from Hashtags

Empirical Methods in Natural Language Processing (EMNLP), 2014


Abstract

We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. Beyond strong performance on the hashtag prediction task itself, we show that the learned text representation (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.
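The paper's exact architecture and training objective are not reproduced on this page, so the following is only a minimal sketch of the idea summarized in the abstract: a convolutional encoder maps a post to a fixed-length embedding, hashtags live in the same embedding space and are scored by inner product, and training pushes the observed hashtag above sampled negatives. PyTorch is used for concreteness; all module names, hyperparameters, and the margin_ranking_loss helper are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; hyperparameters and names are assumptions,
# not the values or code from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TagSpaceSketch(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=64, hidden_dim=256, window=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Convolution over sliding word windows, then max pooling over positions.
        self.conv = nn.Conv1d(embed_dim, hidden_dim, kernel_size=window, padding=window // 2)
        self.proj = nn.Linear(hidden_dim, embed_dim)
        # Hashtags are embedded in the same space; relevance is an inner product.
        self.tag_emb = nn.Embedding(num_tags, embed_dim)

    def encode(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        x = self.word_emb(token_ids).transpose(1, 2)   # (batch, embed_dim, seq_len)
        h = torch.tanh(self.conv(x))                   # (batch, hidden_dim, seq_len)
        h = h.max(dim=2).values                        # max pool over positions
        return torch.tanh(self.proj(h))                # (batch, embed_dim) post embedding

    def tag_scores(self, token_ids):
        # Score every hashtag for every post in the batch.
        return self.encode(token_ids) @ self.tag_emb.weight.t()   # (batch, num_tags)


def margin_ranking_loss(scores, positive_tags, margin=1.0):
    """Pairwise hinge loss: the observed hashtag should outscore a randomly
    sampled hashtag by at least `margin`. This is a simplified stand-in for
    the ranking objective used in the paper, which may differ in detail."""
    batch = torch.arange(scores.size(0))
    negative_tags = torch.randint(scores.size(1), positive_tags.shape)
    pos = scores[batch, positive_tags]
    neg = scores[batch, negative_tags]
    return F.relu(margin - pos + neg).mean()
```

Once such a model is trained on (post, hashtag) pairs, the encode step alone yields a text embedding that can be reused without the hashtag labels, for example by ranking candidate documents by inner product or cosine similarity with a query post, as in the document recommendation experiment mentioned above.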

