Abstract
Deriving generalization bounds for stable algorithms is a classical question in
learning theory, taking its roots in the early works of Vapnik and Chervonenkis
and Rogers and Wagner. In a series of recent breakthrough papers, Feldman and
Vondrak have shown that the best known high probability upper bounds for
uniformly stable learning algorithms due to Bousquet and Elisseeff are
sub-optimal in some natural regimes. To do so, they proved two generalization
bounds that significantly outperform the original bound. Feldman
and Vondrak also asked if it is possible to provide sharper bounds and prove
corresponding high probability lower bounds. This paper is devoted to these
questions: firstly, inspired by the original arguments of Feldman and Vondrak,
we provide a short proof of a moment bound that implies a generalization bound stronger than
both recent results. Secondly, we prove general lower bounds, showing that our
moment bound is sharp (up to a logarithmic factor) unless some additional
properties of the corresponding random variables are used. Our main
probabilistic result is a general concentration inequality for weakly
correlated random variables, which may be of independent interest.