Publication

On the Ineffectiveness of Variance Reduced Optimization for Deep Learning

Neural Information Processing Systems (NeurIPS) - 2019

Aaron Defazio, Léon Bottou


Abstract

The application of stochastic variance reduction to optimization has shown remarkable recent theoretical and practical success. The applicability of these techniques to the hard non-convex optimization problems encountered during the training of modern deep neural networks is an open problem. We show that naive application of the SVRG technique and related approaches fails, and we explore why.
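For context, SVRG periodically computes a full-batch gradient at a snapshot of the weights and uses it to anchor subsequent stochastic gradient steps, reducing their variance. Below is a minimal NumPy sketch of the standard SVRG estimator on a toy least-squares problem; the toy data, step size, and variable names are illustrative assumptions, not details from the paper.

    import numpy as np

    # Toy least-squares problem: minimize f(w) = (1/n) sum_i 0.5 * (x_i . w - y_i)^2
    rng = np.random.default_rng(0)
    n, d = 1000, 20
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    def grad_i(w, i):
        # Gradient of the i-th component function.
        return (X[i] @ w - y[i]) * X[i]

    def full_grad(w):
        # Full-batch gradient, recomputed once per outer epoch.
        return X.T @ (X @ w - y) / n

    w = np.zeros(d)
    lr, epochs, m = 0.01, 20, n  # m inner steps per snapshot

    for _ in range(epochs):
        w_snap = w.copy()        # snapshot point
        mu = full_grad(w_snap)   # anchor gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased, and its variance
            # shrinks as the iterate w approaches the snapshot w_snap.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= lr * g

    print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))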

Related Publications

NeurIPS - October 22, 2020

Re-Examining Linear Embeddings for High-dimensional Bayesian Optimization

Benjamin Letham, Roberto Calandra, Akshara Rai, Eytan Bakshy

Journal of Machine Learning Research (JMLR) - September 30, 2019

Bayesian Optimization for Policy Search via Online-Offline Experimentation

Benjamin Letham, Eytan Bakshy

International Workshop on Mutation Analysis at ICST - May 6, 2021

An Empirical Comparison of Mutant Selection Assessment Metrics

Jie M. Zhang, Lingming Zhang, Dan Hao, Lu Zhang, Mark Harman

AISTATS - April 13, 2021

Aligning Time Series on Incomparable Spaces

Samuel Cohen, Giulia Luise, Alexander Terenin, Brandon Amos, Marc Peter Deisenroth
