Publication

Dynamic Meta-Embeddings for Improved Sentence Representations

Empirical Methods in Natural Language Processing (EMNLP)


Abstract

While one of the first steps in many NLP systems is selecting what pre-trained word embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves. To that end, we introduce dynamic meta-embeddings, a simple yet effective method for the supervised learning of embedding ensembles, which leads to state-of-the-art performance within the same model class on a variety of tasks. We subsequently show how the technique can be used to shed new light on the usage of word embeddings in NLP systems.
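The abstract describes learning an ensemble over several pre-trained embedding sets, with the network itself weighting each set per token. A minimal NumPy sketch of that idea is below (parameter names such as `projections`, `attn_w`, and `attn_b` are illustrative, not from the paper): each embedding type is projected into a shared space, a scalar attention score is computed per token per type, and a softmax over types yields the mixing weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_meta_embedding(embeddings, projections, attn_w, attn_b):
    """Combine n embedding sets into one meta-embedding per token.

    embeddings:  list of n arrays, each (seq_len, d_i) - one per embedding type
    projections: list of n arrays, each (d_i, d_common) - learned linear maps
    attn_w:      (d_common,) scoring vector; attn_b: scalar bias (both learned)
    """
    # Project every embedding type into a shared d_common-dimensional space
    projected = np.stack([e @ P for e, P in zip(embeddings, projections)])
    # -> shape (n, seq_len, d_common)

    # One scalar score per token per embedding type, softmax across types
    scores = projected @ attn_w + attn_b        # (n, seq_len)
    weights = softmax(scores, axis=0)           # attention over embedding types

    # Weighted sum over types gives the meta-embedding: (seq_len, d_common)
    return (weights[..., None] * projected).sum(axis=0)

# Toy usage: two embedding types (e.g. 300-d and 100-d) for a 5-token sentence
rng = np.random.default_rng(0)
emb = [rng.standard_normal((5, 300)), rng.standard_normal((5, 100))]
proj = [rng.standard_normal((300, 256)) * 0.01, rng.standard_normal((100, 256)) * 0.01]
meta = dynamic_meta_embedding(emb, proj, rng.standard_normal(256), 0.0)
print(meta.shape)  # (5, 256)
```

In training, the projection matrices and attention parameters would be learned end-to-end with the downstream task; inspecting the softmax weights is what lets the method "shed light" on which embeddings the model relies on.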

Related Publications


EACL - April 20, 2021

FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary

Terra Blevins, Mandar Joshi, Luke Zettlemoyer

The Springer Series on Challenges in Machine Learning - December 12, 2019

The Second Conversational Intelligence Challenge (ConvAI2)

Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Jason Weston

ICLR - May 4, 2021

Combining Label Propagation and Simple Models Out-performs Graph Neural Networks

Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin Benson

ICLR - May 3, 2021

Creative Sketch Generation

Songwei Ge, Vedanuj Goswami, Larry Zitnick, Devi Parikh
