Jacob Andreas is a third-year PhD student at UC Berkeley, working with Dan Klein. His research focuses on models for grounded language learning that link natural language to perception and action. He received a B.S. from Columbia in 2012 and an M.Phil. from Cambridge in 2013.

Research Summary

Humans use language to communicate beliefs about the state of the world, to describe solutions to problems, and, more generally, to build abstractions out of the primitive elements of sensation and motion. In order for automated language processing techniques to capture the meaning of language, and not simply its structure, they must also learn to relate language to the world—as it can be perceived in photographs and databases, and manipulated with robotic arms and web APIs.

Jacob’s current research aims to develop compositional question answering models capable of reasoning about a wide variety of information sources, including both natural images and structured knowledge bases. This work is motivated by a broader interest in hybrid neural/formal systems, which combine the advantages of deep representations and discrete linguistic structure. Other recent projects include models for translating instructions into planning heuristics and control policies, and a theoretically motivated scheme for fast approximate queries against large log-linear models.