November 2, 2018
Do explanations make VQA models more predictable to a human?
Empirical Methods in Natural Language Processing (EMNLP)
A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable 'explanations' of their decision process, especially for interactive tasks like Visual Question Answering (VQA). In this work, we analyze whether existing explanations indeed make a VQA model – its responses as well as its failures – more predictable to a human.
By: Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh
Facebook AI Research
Natural Language Processing & Speech