
Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion

Empirical Methods in Natural Language Processing (EMNLP)


Abstract

Continuous word representations learned separately on distinct languages can be aligned so that their words become comparable in a common space. Existing work typically solves a least-squares regression problem to learn a rotation that aligns a small bilingual lexicon, and uses a retrieval criterion only at inference time. In this paper, we propose a unified formulation that directly optimizes a retrieval criterion in an end-to-end fashion. Our experiments on standard benchmarks show that our approach outperforms the state of the art on word translation, with the biggest improvements observed for distant language pairs such as English-Chinese.
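To make the two ingredients contrasted in the abstract concrete, here is a minimal NumPy sketch of the conventional two-step pipeline the paper improves upon: an orthogonal least-squares (Procrustes) mapping fit on a seed lexicon, followed by retrieval at inference. The function names, the neighborhood size k, and the choice of a CSLS-style retrieval criterion are illustrative assumptions, not the paper's exact formulation, which instead optimizes the retrieval criterion directly while learning the mapping.

```python
import numpy as np

def normalize(M):
    """L2-normalize each row so dot products are cosine similarities."""
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def procrustes(X, Y):
    """Baseline mapping from the abstract: the orthogonal matrix W
    minimizing ||X W - Y||_F over a seed lexicon of paired rows,
    obtained in closed form from an SVD."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def csls_retrieve(Xq, Y, k=10):
    """Retrieval criterion at inference (assumed here to be CSLS-style):
    discount target words that are close to everything (hubs).
    Xq: mapped, normalized source queries; Y: normalized target matrix."""
    sims = Xq @ Y.T                                   # cosine similarities
    # mean similarity of each query to its k nearest targets
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    # mean similarity of each target to its k nearest queries
    # (a fuller implementation would use the whole mapped source vocabulary)
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return (2 * sims - r_src - r_tgt).argmax(axis=1)  # best target per query

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_lex, n_vocab = 300, 1000, 5000
    X_lex = normalize(rng.standard_normal((n_lex, d)))   # source seed vectors
    Y_lex = normalize(rng.standard_normal((n_lex, d)))   # paired target vectors
    W = procrustes(X_lex, Y_lex)
    Y_all = normalize(rng.standard_normal((n_vocab, d)))
    preds = csls_retrieve(normalize(X_lex @ W), Y_all, k=10)
    print(preds[:5])
```

The paper's contribution, per the abstract, is to fold such a retrieval criterion into the training objective itself rather than applying it only after a least-squares fit; the sketch above shows only the baseline pipeline being improved.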

