Publication

Learning Near Optimal Policies with Low Inherent Bellman Error

International Conference on Machine Learning (ICML)


Abstract

We study the exploration problem with approximate linear action-value functions in episodic reinforcement learning under the notion of low inherent Bellman error, a condition normally employed to show convergence of approximate value iteration. First we relate this condition to other common frameworks and show that it is strictly more general than the low rank (or linear) MDP assumption of prior work. Second we provide an algorithm with a high probability regret bound Õ(∑_{t=1}^H d_t √K + ∑_{t=1}^H √d_t I K), where H is the horizon, K is the number of episodes, I is the value of the inherent Bellman error and d_t is the feature dimension at timestep t. In addition, we show that the result is unimprovable beyond constants and logarithmic factors by providing a matching lower bound. This has two important consequences: 1) it shows that exploration is possible using only batch assumptions, with an algorithm that achieves the optimal statistical rate for the setting we consider, which is more general than prior work on low-rank MDPs; 2) the lack of closedness (measured by the inherent Bellman error) is only amplified by √d_t despite working in the online setting. Finally, the algorithm reduces to the celebrated LinUCB when H = 1, but with a different choice of the exploration parameter that allows handling misspecified contextual linear bandits. While computational tractability questions remain open for the MDP setting, this enriches the class of MDPs with a linear representation of the action-value function where statistically efficient reinforcement learning is possible.
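As the abstract notes, the proposed algorithm reduces to LinUCB in the single-step case (H = 1). The sketch below is a standard LinUCB routine for a contextual linear bandit, included only to illustrate that baseline; it is not the paper's algorithm, and the exploration parameter `alpha` is taken as a given input rather than set to the misspecification-aware choice described in the paper.

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB sketch for a contextual linear bandit (the H = 1 case).

    NOTE: `alpha` is an assumed, user-supplied exploration scale; the paper's
    specific choice for misspecified bandits is not reproduced here.
    """

    def __init__(self, dim, alpha=1.0, reg=1.0):
        self.alpha = alpha                 # exploration scale (assumption, not the paper's value)
        self.A = reg * np.eye(dim)         # regularized design matrix
        self.b = np.zeros(dim)             # accumulated reward-weighted features

    def select(self, features):
        """Pick the arm maximizing the optimistic (UCB) value estimate.

        `features` is an (n_arms, dim) array of per-arm feature vectors.
        """
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b             # ridge-regression estimate of the reward parameter
        bonus = np.sqrt(np.einsum("ij,jk,ik->i", features, A_inv, features))
        return int(np.argmax(features @ theta + self.alpha * bonus))

    def update(self, x, reward):
        """Incorporate the observed reward for the chosen feature vector x."""
        self.A += np.outer(x, x)
        self.b += reward * x
```

A typical loop would call `select` on the current context's arm features, observe a reward, and call `update` with the chosen feature vector; the confidence bonus shrinks as the design matrix accumulates data along explored directions.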

Related Publications

MICCAI - October 5, 2020

Active MR k-space Sampling with Reinforcement Learning

Luis Pineda, Sumana Basu, Adriana Romero, Roberto Calandra, Michal Drozdzal

Multimodal Video Analysis Workshop at ECCV - August 23, 2020

Audio-Visual Instance Discrimination

Pedro Morgado, Nuno Vasconcelos, Ishan Misra

AISTATS - November 3, 2020

A single algorithm for both restless and rested rotting bandits

Julien Seznec, Pierre Menard, Alessandro Lazaric, Michal Valko

COLT - November 3, 2020

Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret

Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
