Abstract
Computational results demonstrate that posterior sampling for reinforcement
learning (PSRL) dramatically outperforms algorithms driven by optimism, such as
UCRL2. We provide insight into the extent of this performance boost and the
phenomenon that drives it. We leverage this insight to establish an
$\tilde{O}(H\sqrt{SAT})$ Bayesian expected regret bound for PSRL in
finite-horizon episodic Markov decision processes, where $H$ is the horizon,
$S$ is the number of states, $A$ is the number of actions and $T$ is the time
elapsed. This improves upon the best previous bound of $\tilde{O}(H S
\sqrt{AT})$ for any reinforcement learning algorithm.
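
For concreteness, a minimal sketch of the PSRL episode loop in a tabular finite-horizon MDP, assuming a Dirichlet prior over transition probabilities and a known reward function; the `env` interface and all names here are illustrative assumptions, not part of the paper:

```python
import numpy as np

def psrl(env, S, A, H, n_episodes, seed=0):
    """Sketch of PSRL for a finite-horizon episodic MDP.

    Assumes tabular dynamics with a Dirichlet(1) prior per (s, a) and a
    known reward matrix env.reward of shape (S, A). `env` is a
    hypothetical interface with reset() -> s and step(a) -> s'.
    """
    rng = np.random.default_rng(seed)
    # Dirichlet posterior pseudo-counts over next states for each (s, a).
    counts = np.ones((S, A, S))

    for _ in range(n_episodes):
        # Sample one transition model from the current posterior.
        P = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)]
                      for s in range(S)])                      # (S, A, S)

        # Solve the sampled MDP by backward induction over the horizon H.
        Q = np.zeros((H, S, A))
        V = np.zeros((H + 1, S))
        for h in range(H - 1, -1, -1):
            Q[h] = env.reward + P @ V[h + 1]                   # (S, A)
            V[h] = Q[h].max(axis=1)

        # Execute the greedy policy of the sampled MDP; update the posterior.
        s = env.reset()
        for h in range(H):
            a = int(Q[h, s].argmax())
            s_next = env.step(a)
            counts[s, a, s_next] += 1
            s = s_next
```

Unlike optimistic methods such as UCRL2, which plan against confidence-set upper bounds, this loop simply plans against a single statistically plausible MDP drawn from the posterior each episode.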