Abstract
The organizer of a machine learning competition faces the problem of
maintaining an accurate leaderboard that faithfully represents the quality of
the best submission of each competing team. What makes this estimation problem
particularly challenging is its sequential and adaptive nature. As participants
are allowed to repeatedly evaluate their submissions on the leaderboard, they
may begin to overfit to the holdout data that supports the leaderboard. Few
theoretical results give actionable advice on how to design a reliable
leaderboard. Existing approaches therefore often resort to poorly understood
heuristics such as limiting the bit precision of answers and the rate of
re-submission.
In this work, we introduce a notion of "leaderboard accuracy" tailored to the
format of a competition. We introduce a natural algorithm called "the Ladder"
and demonstrate that it simultaneously supports strong theoretical guarantees
in a fully adaptive model of estimation, withstands practical adversarial
attacks, and achieves high utility on real submission files from an actual
competition hosted by Kaggle.
Notably, we are able to sidestep a powerful recent hardness result for
adaptive risk estimation that rules out algorithms such as ours under a
seemingly very similar notion of accuracy. On a practical note, we provide a
completely parameter-free variant of our algorithm that can be deployed in a
real competition with no tuning required whatsoever.
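The abstract does not spell out the mechanism behind the Ladder, but a minimal sketch of a Ladder-style update rule may help make the idea concrete: a new leaderboard score is released only when a submission improves on the best released score by more than a step size, and released scores are coarsened to that step. The parameter name `eta`, the loss interface, and the rounding choice below are illustrative assumptions, not the paper's exact pseudocode.

```python
# Hypothetical sketch of a Ladder-style leaderboard (names are illustrative):
# only report a new score when it beats the previous best by more than `eta`,
# and round released scores to the `eta` grid so little holdout information leaks.

def ladder_leaderboard(submissions, holdout_labels, loss, eta=0.01):
    """Yield the publicly released score after each submission.

    submissions    -- iterable of prediction vectors, one per submission
    holdout_labels -- labels of the holdout set backing the leaderboard
    loss           -- function (predictions, labels) -> empirical loss
    eta            -- minimum improvement required to update the board
    """
    best = float("inf")  # best score released so far
    for preds in submissions:
        score = loss(preds, holdout_labels)
        if score < best - eta:
            # Release the improved score, coarsened to the eta grid.
            best = round(score / eta) * eta
        yield best


if __name__ == "__main__":
    # Toy usage with mean squared error on a tiny holdout set.
    mse = lambda p, y: sum((a - b) ** 2 for a, b in zip(p, y)) / len(y)
    holdout = [1.0, 0.0, 1.0]
    subs = [[0.9, 0.4, 0.6], [0.95, 0.1, 0.9], [0.94, 0.12, 0.89]]
    print(list(ladder_leaderboard(subs, holdout, mse, eta=0.05)))
```

In this sketch the third, nearly identical submission does not move the board, which is the behavior that limits overfitting to the holdout data.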