Lookahead-Bounded Q-Learning

International Conference on Machine Learning (ICML)

Abstract

We introduce the lookahead-bounded Q-learning (LBQL) algorithm, a new, provably convergent variant of Q-learning that seeks to improve the performance of standard Q-learning in stochastic environments through the use of “lookahead” upper and lower bounds. To do this, LBQL employs previously collected experience and each iteration’s state-action values as dual feasible penalties to construct a sequence of sampled information relaxation problems. The solutions to these problems provide estimated upper and lower bounds on the optimal value, which we track via stochastic approximation. These quantities are then used to constrain the iterates to stay within the bounds at every iteration. Numerical experiments confirm the fast convergence of LBQL as compared to the standard Q-learning algorithm and several related techniques. Our approach is particularly appealing in problems that require expensive simulations or real-world interactions.
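To make the mechanics concrete, here is a minimal, hypothetical sketch of a bound-constrained tabular Q-learning loop in the spirit of the description above. The environment interface (`reset`, `step`), the `bound_fn` callable that stands in for the sampled information relaxation subproblem, and all hyperparameter names are illustrative assumptions, not the authors' implementation; in LBQL the bounds come from penalized lookahead problems built on stored experience and the current state-action values.

```python
import numpy as np

def lbql_sketch(env, bound_fn, num_states, num_actions, episodes=500,
                alpha=0.1, beta=0.05, gamma=0.95, eps=0.1, seed=0):
    """Illustrative bound-constrained tabular Q-learning loop.

    `bound_fn(Q, s, a)` is a placeholder for the sampled information
    relaxation subproblem: it should return (lower, upper) estimates of
    the optimal value at (s, a). Here it is treated as a black box.
    Assumes env.reset() returns a state index and env.step(a) returns
    (next_state, reward, done).
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((num_states, num_actions))
    L = np.full((num_states, num_actions), -np.inf)  # lower-bound estimates
    U = np.full((num_states, num_actions), np.inf)   # upper-bound estimates

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            a = (int(rng.integers(num_actions)) if rng.random() < eps
                 else int(np.argmax(Q[s])))
            s_next, r, done = env.step(a)

            # Standard Q-learning update toward the one-step target.
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])

            # Track the lookahead bounds with a stochastic-approximation step.
            lb, ub = bound_fn(Q, s, a)
            if np.isneginf(L[s, a]):            # first visit: initialize bounds
                L[s, a], U[s, a] = lb, ub
            else:                               # otherwise smooth toward new estimates
                L[s, a] += beta * (lb - L[s, a])
                U[s, a] += beta * (ub - U[s, a])

            # Constrain the iterate to stay within the current bounds.
            Q[s, a] = float(np.clip(Q[s, a], L[s, a], U[s, a]))
            s = s_next
    return Q
```

The key design point this sketch tries to convey is that the bound estimates are maintained separately from the Q-iterates (here with their own step size `beta`) and only enter through the projection step, which keeps the usual Q-learning update intact while preventing the iterates from drifting outside the estimated interval.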
