FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary

Conference of the European Chapter of the Association for Computational Linguistics (EACL)

Abstract

Current models for Word Sense Disambiguation (WSD) struggle to disambiguate rare senses, despite reaching human performance on global WSD metrics. This stems from a lack of data for both modeling and evaluating rare senses in existing WSD datasets. In this paper, we introduce FEWS (Few-shot Examples of Word Senses), a new low-shot WSD dataset automatically extracted from example sentences in Wiktionary (Wiktionary.com). We use the term low-shot as an umbrella term for few- and zero-shot learning. FEWS has high sense coverage across different natural language domains and provides: (1) a large training set that covers many more senses than previous datasets and (2) a comprehensive evaluation set containing few- and zero-shot examples of a wide variety of senses. We establish baselines on FEWS with knowledge-based and neural WSD approaches and present transfer learning experiments demonstrating that models additionally trained with FEWS better capture rare senses in existing WSD datasets. Finally, we find that humans outperform the best baseline models on FEWS, indicating that FEWS will support significant future work on low-shot WSD.

