The Goldilocks Principle: Reading Children’s Books with Explicit Memory Representations

ICLR 2016


Abstract

We introduce a new test of how well language models capture meaning in children’s books. Unlike standard language modeling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lower-frequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation strongly influences performance: there is a sweet spot, not too big and not too small, between single words and full sentences, at which the most meaningful information in a text is effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.
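
To make the window-based memory idea concrete, below is a minimal sketch in Python. It is not the authors' implementation: it stores one memory per occurrence of a candidate answer, each covering a small window of surrounding words, and scores candidates by softmax attention of the query over those windows. The embeddings are random and untrained, and all names and parameters here (build_window_memories, attend, the half-width b) are illustrative assumptions.

# Sketch of window-based memories with attention (illustrative, not the
# authors' code). Embeddings are random; in the paper they are learned,
# and the attention is trained with the self-supervision mentioned above.

import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # embedding dimension (arbitrary for this sketch)

def embed(words, table):
    """Bag-of-words embedding: sum the (random, untrained) word vectors."""
    return sum(table.setdefault(w, rng.standard_normal(DIM)) for w in words)

def build_window_memories(passage, candidates, b=2):
    """One memory per occurrence of a candidate word: the window of
    b words on either side of it ('not too big and not too small')."""
    memories = []
    for i, w in enumerate(passage):
        if w in candidates:
            window = passage[max(0, i - b): i + b + 1]
            memories.append((w, window))
    return memories

def attend(query, memories, table):
    """Softmax attention of the query over window memories; a candidate's
    score is the total attention mass on windows centred on it."""
    q = embed(query, table)
    sims = np.array([q @ embed(win, table) for _, win in memories])
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    scores = {}
    for (cand, _), a in zip(memories, weights):
        scores[cand] = scores.get(cand, 0.0) + a
    return scores

passage = "the cat sat on the mat while the dog slept by the door".split()
candidates = {"cat", "dog", "mat", "door"}
table = {}
memories = build_window_memories(passage, candidates, b=2)
query = "the XXXXX sat on the mat".split()  # XXXXX marks the blank
print(attend(query, memories, table))

The sketch only shows the data flow behind the "Goldilocks" sweet spot: each memory holds a small window of text rather than a single word or a full sentence, and the query attends over those windows to score candidate answers.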
