Article

Model Selection in Reinforcement Learning

Amir-massoud Farahmand and Csaba Szepesvári.
Machine Learning Journal, 85 (3): 299--332 (December 2011)
DOI: 10.1007/s10994-011-5254-7

Abstract

(This version is identical to the MLJ version except that a minor issue in the proof of Theorem 2 is corrected.) We consider the problem of model selection in the batch (offline, non-interactive) reinforcement learning setting when the goal is to find an action-value function with the smallest Bellman error among a countable set of candidate functions. We propose a complexity regularization-based model selection algorithm, BErMin, and prove that it enjoys an oracle-like property: the estimator's error differs from that of an oracle, who selects the candidate with the minimum Bellman error, by only a constant factor and a small remainder term that vanishes at a parametric rate as the number of samples increases. As an application, we consider the problem in which the true action-value function belongs to an unknown member of a nested sequence of function spaces. We show that under some additional technical conditions BErMin leads to a procedure whose rate of convergence, up to a constant factor, matches that of an oracle who knows which of the nested function spaces the true action-value function belongs to, i.e., the procedure achieves adaptivity.
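
Schematically, the oracle-like property described above amounts to an inequality of the following form; this is only an illustrative summary, as the paper's theorems give the exact constants and the precise form of the remainder term:

\mathrm{BE}(\hat{Q}) \;\le\; C \cdot \inf_{k \ge 1} \mathrm{BE}(Q_k) \;+\; R_n,
\qquad R_n \to 0 \ \text{at a parametric rate in the sample size } n,

where \hat{Q} is the candidate selected by BErMin, Q_1, Q_2, \ldots are the candidate action-value functions, \mathrm{BE}(\cdot) denotes the Bellman error, and C is the constant factor mentioned in the abstract.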
