Easing Non-Convex Optimization with Neural Networks

International Conference on Learning Representations (ICLR)

By: David Lopez-Paz, Levent Sagun

Abstract

Despite being non-convex, deep neural networks are surprisingly amenable to optimization by gradient descent. In this note, we use a deep neural network with D parameters to parametrize the input space of a generic d-dimensional non-convex optimization problem. Our experiments show that minimizing over the D ≫ d over-parametrized variables provided by the deep neural network eases and accelerates the optimization of various non-convex test functions.
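The core idea can be sketched as follows: instead of minimizing a non-convex function f(x) directly over x, express the candidate solution as the output of a neural network, x = g(θ), and minimize f(g(θ)) over the network's many parameters θ. The snippet below is a minimal illustration of this reparametrization, not the paper's actual experimental setup: the Rastrigin test function, the one-hidden-layer tanh architecture, the sizes, the learning rate, and the use of finite-difference gradients in place of backpropagation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    # Classic non-convex test function; global minimum f(0) = 0.
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

d = 2   # dimension of the original optimization problem
h = 8   # hidden width; D = h*h + h + d*h + d = 90 parameters, so D >> d

# Fixed random network input z; the candidate solution is x = g(theta)
# for a one-hidden-layer tanh network g (architecture is illustrative).
z = rng.normal(size=h)
theta = [0.1 * rng.normal(size=(h, h)),  # W1
         np.zeros(h),                    # b1
         0.1 * rng.normal(size=(d, h)),  # W2
         np.zeros(d)]                    # b2

def network(theta):
    W1, b1, W2, b2 = theta
    return W2 @ np.tanh(W1 @ z + b1) + b2

def loss(theta):
    # Minimize f(g(theta)) over theta instead of f(x) over x.
    return rastrigin(network(theta))

def num_grad(theta, eps=1e-5):
    # Central finite differences stand in for backpropagation here.
    grads = []
    for p in theta:
        g = np.zeros_like(p)
        for i in np.ndindex(p.shape):
            old = p[i]
            p[i] = old + eps
            hi = loss(theta)
            p[i] = old - eps
            lo = loss(theta)
            p[i] = old
            g[i] = (hi - lo) / (2.0 * eps)
        grads.append(g)
    return grads

# Plain gradient descent on the over-parametrized variables theta.
f0 = loss(theta)
best = f0
for _ in range(300):
    grads = num_grad(theta)
    theta = [p - 3e-4 * g for p, g in zip(theta, grads)]
    best = min(best, loss(theta))
print(f"initial f = {f0:.3f}, best f found = {best:.3f}")
```

The same loop run directly on x (a d-dimensional vector) optimizes only d variables; the reparametrized version optimizes D ≫ d variables whose gradients combine through the network, which is the setting the note's experiments study.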