
Dialog without Dialog Data: Learning Visual Dialog Agents from VQA Data

Conference on Neural Information Processing Systems (NeurIPS)


Abstract

Can we develop visually grounded dialog agents that can efficiently adapt to new tasks without forgetting how to talk to people? Such agents could leverage a larger variety of existing data to generalize to new tasks, minimizing expensive data collection and annotation. In this work, we study a setting we call “Dialog without Dialog”, which requires agents to develop visually grounded dialog models that can adapt to new tasks without language-level supervision. By factorizing intention and language, our model minimizes linguistic drift after fine-tuning for new tasks. We present qualitative results, automated metrics, and human studies that all show our model can adapt to new tasks and maintain language quality. Baselines either fail to perform well at new tasks or experience language drift, becoming unintelligible to humans. Code has been made available at: https://github.com/mcogswell/dialog_without_dialog.
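To make the intention/language factorization concrete, below is a minimal, hypothetical PyTorch sketch of the idea: a trainable intention module picks a discrete latent intent, while a separately trained language decoder that verbalizes the intent is frozen during task fine-tuning, so adapting to a new task cannot drift the language. All class, module, and parameter names here are illustrative assumptions, not the API of the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedQuestioner(nn.Module):
    """Toy sketch: intention and language as separate factors.

    Only the intention head is updated when adapting to a new task;
    the language decoder stays frozen, which is one way to limit
    linguistic drift. Hypothetical names, not the released code.
    """

    def __init__(self, ctx_dim=256, n_intents=32, vocab_size=1000, emb_dim=128):
        super().__init__()
        # Intention factor: maps dialog/image context to a discrete intent.
        self.intent_head = nn.Linear(ctx_dim, n_intents)
        # Language factor: verbalizes an intent embedding as a question.
        self.intent_emb = nn.Embedding(n_intents, emb_dim)
        self.decoder = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.word_head = nn.Linear(emb_dim, vocab_size)
        # Freeze the language factor before fine-tuning on a new task.
        for module in (self.intent_emb, self.decoder, self.word_head):
            for p in module.parameters():
                p.requires_grad = False

    def forward(self, ctx, max_len=8):
        intent_logits = self.intent_head(ctx)                  # (B, n_intents)
        # Discrete but differentiable intent choice (straight-through Gumbel).
        z = F.gumbel_softmax(intent_logits, tau=1.0, hard=True)
        h = z @ self.intent_emb.weight                         # (B, emb_dim)
        steps = h.unsqueeze(1).repeat(1, max_len, 1)           # decoder inputs
        out, _ = self.decoder(steps)                           # (B, T, emb_dim)
        return intent_logits, self.word_head(out)              # per-step word logits

# Example: only intent_head receives gradients during task adaptation.
model = FactorizedQuestioner()
intent_logits, word_logits = model(torch.randn(4, 256))
```

In the paper's setting, the language module would be grounded by pre-training on VQA data (per the title); the sketch only illustrates the general freeze-the-language-factor principle, not the authors' exact architecture or training procedure.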

