
Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline

European Conference on Computer Vision (ECCV)


Abstract

Prior work in visual dialog has focused on training deep neural models on VisDial in isolation. Instead, we present an approach to leverage pretraining on related vision-language datasets before transferring to visual dialog. We adapt the recently proposed ViLBERT model for multi-turn visually-grounded conversations. Our model is pretrained on the Conceptual Captions and Visual Question Answering datasets, and finetuned on VisDial. Our best single model outperforms prior published work by more than 1% absolute on NDCG and MRR.
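For concreteness, the sketch below shows the general discriminative setup this describes: a transformer jointly encodes the dialog history, current question, and a candidate answer together with image region features, and produces a relevance score used to rank the 100 candidates per question. This is a minimal toy, not the authors' released code; all class names, dimensions, and the single-stream fusion are assumptions for illustration (the actual model is a two-stream ViLBERT).

```python
# Minimal sketch (NOT the authors' code) of discriminative answer scoring for
# visual dialog. Token ids, vocabulary size, and feature sizes are toy values.
import torch
import torch.nn as nn

class ToyVisualDialogScorer(nn.Module):
    def __init__(self, vocab_size=1000, dim=128, img_dim=2048, heads=4, layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.img_proj = nn.Linear(img_dim, dim)  # project image region features
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.score_head = nn.Linear(dim, 1)  # one relevance score per sequence

    def forward(self, text_ids, img_feats):
        # text_ids: (batch, seq_len) ids for [history; question; candidate]
        # img_feats: (batch, regions, img_dim) pre-extracted region features
        tokens = self.tok_emb(text_ids)
        regions = self.img_proj(img_feats)
        fused = torch.cat([regions, tokens], dim=1)  # single-stream fusion (toy)
        encoded = self.encoder(fused)
        # Score from the first position of the fused sequence.
        return self.score_head(encoded[:, 0]).squeeze(-1)

# Rank 100 candidate answers for one question (VisDial's ranking setup).
model = ToyVisualDialogScorer()
candidates = torch.randint(0, 1000, (100, 64))        # 100 candidate sequences
image = torch.randn(1, 36, 2048).expand(100, -1, -1)  # shared image features
scores = model(candidates, image)                     # higher = more relevant
ranking = scores.argsort(descending=True)
```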

Next, we find that additional finetuning using “dense” annotations in VisDial leads to even higher NDCG (more than 10% over our base model) but hurts MRR (more than 17% below our base model!). This highlights a trade-off between the two primary metrics, NDCG and MRR, which we find is due to dense annotations not correlating well with the original ground-truth answers to questions.
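The trade-off follows directly from how the two metrics are computed: MRR rewards ranking the single ground-truth answer first, while NDCG rewards placing all densely-annotated relevant answers high, even when the ground truth is not among them. The toy calculation below (invented numbers, not from the paper) shows a ranking driven by dense relevance scoring perfectly on NDCG while dropping MRR.

```python
# Toy illustration (numbers invented) of the NDCG/MRR trade-off.
import math

def mrr(rank_of_gt):
    # Reciprocal rank of the single ground-truth answer (1-indexed).
    return 1.0 / rank_of_gt

def ndcg(relevances_in_ranked_order):
    # Normalized discounted cumulative gain over dense relevance scores.
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(relevances_in_ranked_order))
    ideal = sorted(relevances_in_ranked_order, reverse=True)
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Suppose dense annotators rated two paraphrases as highly relevant (0.8)
# but gave the original ground-truth answer only 0.4. A model trained on
# dense annotations ranks the paraphrases above the ground truth:
dense_ranking = [0.8, 0.8, 0.4, 0.0]         # relevances at ranks 1..4; GT at rank 3
print(mrr(3), ndcg(dense_ranking))           # MRR = 0.33, NDCG = 1.0

# A model faithful to the ground truth ranks it first instead:
gt_ranking = [0.4, 0.8, 0.8, 0.0]            # GT at rank 1
print(mrr(1), ndcg(gt_ranking))              # MRR = 1.0, NDCG ≈ 0.87
```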
