Regularized Least-Squares Regression: Learning from a Beta-Mixing Sequence

Journal of Statistical Planning and Inference, 142(2): 493--505 (February 2012)

Abstract

We analyze the rate of convergence of the estimation error in regularized least-squares regression when the data is exponentially beta-mixing. The results are proven under the assumption that the metric entropy of the balls in the chosen function space grows at most polynomially. In order to prove our main result, we also derive a relative deviation concentration inequality for beta-mixing processes, which might be of independent interest. The other major techniques that we use are the independent-blocks technique and the peeling device. An interesting aspect of our analysis is that in order to obtain fast rates we have to make the block sizes dependent on the layer of peeling. With this approach, up to a logarithmic factor, we recover the optimal minimax rates available for the i.i.d. case. In particular, our rate asymptotically matches the optimal rate of convergence when the regression function belongs to a Sobolev space.
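To make the setting concrete, below is a minimal sketch of the regularized least-squares estimator the abstract refers to, fit on data generated by a dependent (beta-mixing) sequence. The paper works with a general function space whose metric entropy grows at most polynomially; here, purely for illustration, we assume a reproducing kernel Hilbert space with a Gaussian kernel and an AR(1) covariate chain (a standard example of an exponentially beta-mixing process). The kernel, bandwidth, and regularization parameter are illustrative choices, not values from the paper.

```python
import numpy as np

def rbf_kernel(X, Z, bandwidth=1.0):
    # Gaussian kernel matrix: K[i, j] = exp(-||x_i - z_j||^2 / (2 * bandwidth^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def fit_regularized_ls(X, Y, lam=1e-2, bandwidth=1.0):
    """Regularized least-squares (kernel ridge) regression:
    minimize (1/n) * sum_i (f(X_i) - Y_i)^2 + lam * ||f||_H^2.
    By the representer theorem, f(x) = sum_i alpha_i k(x, X_i) with
    alpha = (K + n * lam * I)^{-1} Y."""
    n = X.shape[0]
    K = rbf_kernel(X, X, bandwidth)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), Y)
    return lambda Xnew: rbf_kernel(Xnew, X, bandwidth) @ alpha

# Illustrative beta-mixing data: an AR(1) covariate chain with noisy targets.
rng = np.random.default_rng(0)
n = 400
X = np.zeros((n, 1))
for t in range(1, n):
    X[t] = 0.7 * X[t - 1] + rng.normal(scale=0.5)   # AR(1): exponentially beta-mixing
Y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=n)  # noisy observations of the regression function

f_hat = fit_regularized_ls(X, Y, lam=1e-3, bandwidth=0.5)
print(f_hat(np.array([[0.0], [1.0]])))               # predictions at two test points
```

The paper's analysis concerns how fast such an estimator's error decays when the samples are dependent; the independent-blocks technique and the peeling device mentioned above are proof tools and are not reflected in this fitting code.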
