June 10, 2019
Making Deep Q-learning Methods Robust to Time Discretization
International Conference on Machine Learning (ICML)
Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameter choices, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real-world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller.
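The discretization sensitivity can be seen in a minimal sketch (not from the paper's code; all constants are assumptions for illustration): take one action for a step of duration dt at reward rate r_a, then follow a fixed policy with continuation value V, discounted as gamma**dt. The gap between the Q-values of two actions then scales with dt, so as dt shrinks, Q collapses toward the state value and greedy action selection from noisy Q estimates degrades.

```python
GAMMA = 0.99       # discount per unit of physical time (assumed)
V = 10.0           # continuation value after the first step (assumed)
R1, R2 = 1.0, 0.0  # reward rates of two candidate actions (assumed)

def q(reward_rate, dt):
    # One step of duration dt, then discounted continuation value.
    return reward_rate * dt + GAMMA ** dt * V

for dt in (1.0, 0.1, 0.01, 0.001):
    gap = q(R1, dt) - q(R2, dt)  # equals (R1 - R2) * dt here
    print(f"dt={dt:<6} Q(a1)={q(R1, dt):.4f}  Q(a2)={q(R2, dt):.4f}  gap={gap:.4f}")
```

As dt decreases by a factor of 10, the gap between the two actions' Q-values shrinks by the same factor, while the values themselves converge to V, which is the sensitivity the paper addresses.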
By: Corentin Tallec, Léonard Blier, Yann Ollivier
Facebook AI Research