
Embodied Question Answering

Computer Vision and Pattern Recognition (CVPR)


Abstract

We present a new AI task – Embodied Question Answering (EmbodiedQA) – where an agent is spawned at a random location in a 3D environment and asked a question (‘What color is the car?’). In order to answer, the agent must first intelligently navigate to explore the environment, gather the necessary visual information through first-person (egocentric) vision, and then answer the question (‘orange’). EmbodiedQA requires a range of AI skills – language understanding, visual recognition, active perception, goal-driven navigation, commonsense reasoning, long-term memory, and grounding language into actions. In this work, we develop a dataset of questions and answers in House3D environments, evaluation metrics, and a hierarchical model trained with imitation and reinforcement learning.
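The navigate-then-answer loop the abstract describes can be sketched as a simple episode driver. Everything below is illustrative: the environment, agent, action set, and method names are hypothetical stand-ins, not the paper's actual House3D interface or model.

```python
# Hypothetical sketch of an EmbodiedQA episode: the agent is spawned, receives
# a question, takes navigation actions from egocentric observations, and once
# it stops, it must answer. All class and method names here are assumptions
# made for illustration.

class ToyEnvironment:
    """Stand-in for a House3D-style environment (hypothetical API)."""

    def __init__(self, question, answer, steps_to_goal=3):
        self.question = question
        self._answer = answer          # ground-truth answer for evaluation
        self._remaining = steps_to_goal

    def reset(self):
        # Spawn the agent at a (notionally random) location; return the first
        # egocentric observation along with the question.
        return {"frame": "egocentric-view-0", "question": self.question}

    def step(self, action):
        # Moving reduces the remaining distance to the question's target;
        # the episode ends when the agent chooses to stop.
        if action != "stop":
            self._remaining = max(0, self._remaining - 1)
        done = (action == "stop")
        return {"frame": f"egocentric-view-{self._remaining}"}, done


class ToyAgent:
    """Navigates for a fixed budget, then answers (placeholder policy)."""

    def act(self, observation, steps_taken):
        # A real agent would run a learned navigation policy over visual
        # features; a fixed step budget stands in for that here.
        return "stop" if steps_taken >= 3 else "forward"

    def answer(self, observation, question):
        # Placeholder: a real model predicts the answer from gathered views.
        return "orange"


def run_episode(env, agent, max_steps=10):
    """Drive one perceive-act-answer episode and return the agent's answer."""
    obs = env.reset()
    for t in range(max_steps):
        action = agent.act(obs, t)
        obs, done = env.step(action)
        if done:
            break
    return agent.answer(obs, env.question)


if __name__ == "__main__":
    env = ToyEnvironment("What color is the car?", "orange")
    print(run_episode(env, ToyAgent()))
```

The key structural point is the two-phase interface: navigation actions are taken until the agent itself emits `stop`, and only then is the answer produced, which is what makes the task require active perception rather than a single static glance.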

Related Publications


EACL - April 20, 2021

FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary

Terra Blevins, Mandar Joshi, Luke Zettlemoyer

The Springer Series on Challenges in Machine Learning - December 12, 2019

The Second Conversational Intelligence Challenge (ConvAI2)

Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Jason Weston

ICLR - May 4, 2021

Combining Label Propagation and Simple Models Out-performs Graph Neural Networks

Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin Benson

ICLR - May 3, 2021

Creative Sketch Generation

Songwei Ge, Vedanuj Goswami, Larry Zitnick, Devi Parikh
