July 18, 2019
Tabula nearly rasa: Probing the linguistic knowledge of character-level neural language models trained on unsegmented text
Transactions of the Association for Computational Linguistics (TACL)
Recurrent neural networks (RNNs) have achieved striking performance in many natural language processing tasks. This has renewed interest in whether these generic sequence processing devices are inducing genuine linguistic knowledge. Nearly all current analytical studies, however, initialize the RNNs with a vocabulary of known words and feed them tokenized input during training. We present a multilingual study of the linguistic knowledge encoded in RNNs trained as character-level language models on input data with word boundaries removed.
By: Michael Hahn, Marco Baroni
Facebook AI Research
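To make the setup concrete, here is a minimal sketch, not the authors' code, of training a character-level RNN language model on text with word boundaries removed, as the abstract describes. It uses a small PyTorch LSTM; the toy corpus, model sizes, and hyperparameters are all illustrative assumptions.

```python
# Hypothetical sketch of the abstract's setup: a character-level LSTM
# language model trained on unsegmented text (spaces stripped).
import torch
import torch.nn as nn

# Toy corpus (illustrative); removing spaces deletes word boundaries.
text = "the cat sat on the mat and the dog sat on the log " * 50
text = text.replace(" ", "")

chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    def __init__(self, vocab, emb=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h)  # logits over next character

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

seq_len = 64
for step in range(200):
    # Sample a random window; target is the input shifted by one character.
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i : i + seq_len].unsqueeze(0)
    y = data[i + 1 : i + seq_len + 1].unsqueeze(0)
    loss = loss_fn(model(x).view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the input carries no segmentation, any word-level knowledge such a model displays must be induced from the raw character stream, which is the point probed in the paper.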