AdaGrad stepsizes: sharp convergence over nonconvex landscapes

Journal of Machine Learning Research (JMLR)


Adaptive gradient methods such as AdaGrad and its variants update the stepsize in stochastic gradient descent on the fly, according to the gradients received along the way; such methods have gained widespread use in large-scale optimization for their ability to converge robustly without the need to fine-tune the stepsize schedule. Yet the theoretical guarantees for AdaGrad to date cover only online and convex optimization. We bridge this gap by providing convergence guarantees for AdaGrad on smooth, nonconvex functions. We show that the norm version of AdaGrad (AdaGrad-Norm) converges to a stationary point at the O(log(N)/√N) rate in the stochastic setting, and at the optimal O(1/N) rate in the batch (non-stochastic) setting – in this sense, our convergence guarantees are “sharp”. In particular, the convergence of AdaGrad-Norm is robust to the choice of all hyperparameters of the algorithm, in contrast to stochastic gradient descent, whose convergence depends crucially on tuning the stepsize to the (generally unknown) Lipschitz smoothness constant and the level of stochastic noise on the gradient. Extensive numerical experiments are provided to corroborate our theoretical findings; moreover, the experiments suggest that the robustness of AdaGrad-Norm extends to models used in deep learning.
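
The AdaGrad-Norm update discussed in the abstract keeps a single scalar accumulator of squared gradient norms and divides a base stepsize by its square root. The sketch below is a minimal illustration of that update rule, not the paper's experimental code; the function and parameter names (adagrad_norm, grad_fn, eta, b0) are placeholders chosen for the example.

import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-2, num_steps=1000):
    """Minimal sketch of the AdaGrad-Norm (scalar "norm" AdaGrad) update.

    grad_fn(x) returns a (possibly stochastic) gradient at x.
    eta is the base stepsize and b0 the initial accumulator value;
    the abstract's claim is that convergence is robust to both choices.
    """
    x = np.asarray(x0, dtype=float)
    b_sq = b0 ** 2                           # running sum of squared gradient norms
    for _ in range(num_steps):
        g = grad_fn(x)
        b_sq += np.dot(g, g)                 # accumulate ||g_k||^2
        x = x - (eta / np.sqrt(b_sq)) * g    # effective stepsize eta / b_k shrinks adaptively
    return x

# Example usage: minimize the quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
x_min = adagrad_norm(lambda x: x, x0=np.ones(5))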

Related Publications


Interspeech - October 12, 2021

LiRA: Learning Visual Speech Representations from Audio through Self-supervision

Pingchuan Ma, Rodrigo Mira, Stavros Petridis, Björn W. Schuller, Maja Pantic

ICML - July 18, 2021

Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization

David Eriksson, Pierce I-Jen Chuang, Samuel Daulton, Peng Xia, Akshat Shrivastava, Arun Babu, Shicong Zhao, Ahmed Aly, Ganesh Venkatesh, Maximilian Balandat

IEEE Transactions on Image Processing Journal - March 9, 2021

Inspirational Adversarial Image Generation

Baptiste Rozière, Morgane Rivière, Olivier Teytaud, Jérémy Rapin, Yann LeCun, Camille Couprie

ICML - July 12, 2020

Lookahead-Bounded Q-Learning

Ibrahim El Shar, Daniel Jiang
