On the Ineffectiveness of Variance Reduced Optimization for Deep Learning

Neural Information Processing Systems (NeurIPS)


The application of stochastic variance reduction to optimization has shown remarkable recent theoretical and practical success. The applicability of these techniques to the hard non-convex optimization problems encountered during training of modern deep neural networks is an open problem. We show that naive application of the SVRG technique and related approaches fails, and explore why.
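For context, below is a minimal sketch of the SVRG update the abstract refers to (Johnson & Zhang, 2013): each epoch recomputes the full gradient at a snapshot point and uses it to correct noisy per-example gradients, which drives the variance of the gradient estimate toward zero near a solution. The function names, step size, and toy least-squares problem here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def svrg(grad_i, w0, n, lr, epochs=10):
    """Minimal SVRG sketch.

    grad_i(w, i): gradient of the i-th loss term at w; the full
    gradient is the average of grad_i over i = 0..n-1.
    """
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the epoch's snapshot point.
        g_snap = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(n):  # inner loop, commonly of length n
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased for the full gradient,
            # with variance that shrinks as w approaches w_snap.
            v = grad_i(w, i) - grad_i(w_snap, i) + g_snap
            w -= lr * v
    return w

# Toy usage (hypothetical): least squares on random data.
n, d = 100, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]
w = svrg(grad_i, np.zeros(d), n, lr=0.05)
```

The key design point is the snapshot term: subtracting `grad_i(w_snap, i)` and adding back the full gradient `g_snap` keeps the estimate unbiased while cancelling most of the per-example noise, which is the property whose breakdown in deep networks the paper investigates.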
