Publication

Alternative Structures for Character-Level RNNs

arXiv Preprint


Abstract

Recurrent neural networks are convenient and efficient models for language modeling. However, when applied at the level of characters instead of words, they suffer from several problems. In order to successfully model long-term dependencies, the hidden representation needs to be large. This in turn implies higher computational costs, which can become prohibitive in practice. We propose two alternative structural modifications to the classical RNN model. The first conditions the character-level representation on the previous word representation. The second uses the character history to condition the output probability. We evaluate the performance of the two proposed modifications on challenging, multilingual real-world data.
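
To make the first modification concrete, below is a minimal sketch (not the paper's implementation) of a character-level RNN whose hidden state is additionally conditioned on an embedding of the previously completed word. The module names, layer sizes, and the specific conditioning mechanism (an additive projection of the word embedding into the hidden state) are illustrative assumptions for this sketch only.

import torch
import torch.nn as nn

class WordConditionedCharRNN(nn.Module):
    def __init__(self, n_chars, n_words, char_dim=32, word_dim=64, hidden_dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.word_proj = nn.Linear(word_dim, hidden_dim)   # injects word-level context
        self.rnn_cell = nn.RNNCell(char_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_chars)          # next-character logits

    def forward(self, char_ids, prev_word_id, h=None):
        # char_ids: (batch, seq_len) character indices of the current word
        # prev_word_id: (batch,) index of the previously completed word
        batch, seq_len = char_ids.shape
        if h is None:
            h = torch.zeros(batch, self.rnn_cell.hidden_size)
        # Project the previous word representation into the hidden space.
        word_ctx = self.word_proj(self.word_emb(prev_word_id))  # (batch, hidden_dim)
        logits = []
        for t in range(seq_len):
            x_t = self.char_emb(char_ids[:, t])
            # Condition the character-level state on the word-level context
            # (illustrative choice; the paper may combine them differently).
            h = self.rnn_cell(x_t, h + word_ctx)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1), h

# Toy usage: a batch of 2 words, 6 characters each.
model = WordConditionedCharRNN(n_chars=100, n_words=5000)
chars = torch.randint(0, 100, (2, 6))
prev_word = torch.randint(0, 5000, (2,))
logits, h = model(chars, prev_word)
print(logits.shape)  # torch.Size([2, 6, 100])

The appeal of this kind of conditioning is that the character-level hidden state can stay small, since longer-range (word-level) context is supplied externally rather than stored in the recurrent state itself.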

