Publication

What Makes Training Multi-modal Classification Networks Hard?

Conference on Computer Vision and Pattern Recognition (CVPR)


Abstract

Consider end-to-end training of a multi-modal network vs. a uni-modal network on a task with multiple input modalities: the multi-modal network receives more information, so it should match or outperform its uni-modal counterpart. In our experiments, however, we observe the opposite: the best uni-modal network often outperforms the multi-modal network. This observation is consistent across different combinations of modalities and across different tasks and benchmarks for video classification.

This paper identifies two main causes for this performance drop: first, multi-modal networks are often prone to overfitting due to their increased capacity; second, different modalities overfit and generalize at different rates, so training them jointly with a single optimization strategy is sub-optimal. We address these two problems with a technique we call Gradient-Blending, which computes an optimal blending of modalities based on their overfitting behaviors. We demonstrate that Gradient-Blending outperforms widely used baselines for avoiding overfitting and achieves state-of-the-art accuracy on various tasks, including human action recognition, ego-centric action recognition, and acoustic event detection.
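To make the idea concrete, below is a minimal PyTorch sketch of a blended multi-branch loss, not the paper's implementation: the module and helper names (`TwoStreamNet`, `blending_weights`) are hypothetical, and the checkpoint losses fed to the weight estimator are placeholder values standing in for measurements from real train/validation runs. The weighting rule used here, w_k ∝ G_k / O_k² with G_k the validation-loss drop and O_k the growth of the train/validation gap for branch k, is one way to operationalize "blending based on overfitting behaviors": each branch (every uni-modal head plus the fused head) gets its own classifier, and training minimizes the weighted sum of per-branch losses.

```python
# Illustrative sketch only; names and numbers are assumptions, not the
# authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def blending_weights(train0, val0, trainN, valN):
    """Estimate per-branch weights from losses at two checkpoints.
    G_k = validation-loss drop (generalization gained),
    O_k = growth of the train/val gap (overfitting accrued),
    w_k ∝ G_k / O_k^2, normalized to sum to 1."""
    ws = []
    for t0, v0, tN, vN in zip(train0, val0, trainN, valN):
        G = v0 - vN                      # generalization gained
        O = (vN - tN) - (v0 - t0)        # overfitting accrued
        ws.append(max(G, 0.0) / (O * O + 1e-8))
    s = sum(ws) + 1e-8
    return [w / s for w in ws]

class TwoStreamNet(nn.Module):
    """Two modality encoders, one uni-modal head each, plus a fused head."""
    def __init__(self, dim_a, dim_b, hidden, n_classes):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, n_classes)       # uni-modal head A
        self.head_b = nn.Linear(hidden, n_classes)       # uni-modal head B
        self.head_ab = nn.Linear(2 * hidden, n_classes)  # fused head

    def forward(self, xa, xb):
        za, zb = self.enc_a(xa), self.enc_b(xb)
        fused = self.head_ab(torch.cat([za, zb], dim=1))
        return self.head_a(za), self.head_b(zb), fused

model = TwoStreamNet(dim_a=64, dim_b=32, hidden=128, n_classes=10)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Placeholder per-branch losses at two checkpoints (order: A, B, fused);
# in practice these come from short calibration runs. Note the fused
# branch overfits most here, so it receives the smallest weight.
w = blending_weights(train0=[2.3, 2.3, 2.3], val0=[2.3, 2.3, 2.3],
                     trainN=[0.9, 1.2, 0.5], valN=[1.6, 1.8, 1.5])

# One training step with the blended loss.
xa, xb = torch.randn(8, 64), torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
logits = model(xa, xb)
loss = sum(wk * F.cross_entropy(lg, y) for wk, lg in zip(w, logits))
opt.zero_grad(); loss.backward(); opt.step()
```

Note the design choice the weighting encodes: a branch that keeps improving validation loss while its train/validation gap stays small dominates the blend, while a rapidly overfitting branch (often the fused one) is down-weighted rather than removed.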

