Easing Non-Convex Optimization with Neural Networks

International Conference on Learning Representations (ICLR)


Abstract

Despite being non-convex, deep neural networks are surprisingly amenable to optimization by gradient descent. In this note, we use a deep neural network with D parameters to parametrize the input space of a generic d-dimensional non-convex optimization problem. Our experiments show that minimizing over the D ≫ d over-parametrized variables provided by the deep neural network eases and accelerates the optimization of various non-convex test functions.
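The construction admits a compact implementation: rather than running gradient descent on x ∈ R^d directly, one writes x = g_θ(z) for a deep network g_θ with a fixed input z, and descends on the D network parameters θ instead. Below is a minimal PyTorch sketch under that reading; the network architecture, the Rosenbrock test function, the fixed input z, and all hyperparameters are illustrative assumptions rather than the paper's exact experimental setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 10  # dimension of the original optimization problem

def rosenbrock(x):
    # Classic non-convex test function; global minimum at x = (1, ..., 1).
    return torch.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

# Deep network g_theta mapping a fixed input z to a candidate solution x in R^d.
# Its D parameters (D >> d) become the new optimization variables.
net = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, d),
)
z = torch.randn(32)  # fixed input; only the network weights are trained

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    x = net(z)            # x = g_theta(z), the reparametrized variables
    loss = rosenbrock(x)  # evaluate the original objective at g_theta(z)
    loss.backward()       # gradients flow through the network into theta
    opt.step()

print(f"f(g_theta(z)) = {rosenbrock(net(z)).item():.4f}")  # should approach 0
```

A natural baseline is to optimize the d raw variables directly with the same optimizer and step budget; the paper's claim is that the over-parametrized route eases and accelerates convergence on such test functions.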

Related Publications

MICCAI - October 5, 2020

Active MR k-space Sampling with Reinforcement Learning

Luis Pineda, Sumana Basu, Adriana Romero, Roberto Calandra, Michal Drozdzal

Multimodal Video Analysis Workshop at ECCV - August 23, 2020

Audio-Visual Instance Discrimination

Pedro Morgado, Nuno Vasconcelos, Ishan Misra

ICML - November 3, 2020

Learning Near Optimal Policies with Low Inherent Bellman Error

Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill

AISTATS - November 3, 2020

A single algorithm for both restless and rested rotting bandits

Julien Seznec, Pierre Menard, Alessandro Lazaric, Michal Valko