Excessive reuse of test data has become commonplace in today's machine
learning workflows. Popular benchmarks, competitions, industrial-scale tuning,
and other applications all involve test data reuse well beyond what statistical
confidence bounds can justify. Nonetheless, recent replication studies give
evidence that popular benchmarks continue to support progress despite years of
extensive reuse. We proffer a new explanation for the apparent longevity of
test data: many proposed models are similar in their predictions, and we prove
that this similarity mitigates overfitting. Specifically, we show empirically
that models proposed for the ImageNet ILSVRC benchmark agree in their
predictions well beyond what we can conclude from their accuracy levels alone.
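To make the notion of agreement concrete, the following Python sketch (purely illustrative; the function names, accuracy values, and error model are assumptions, not taken from the paper) compares the observed fraction of test points on which two classifiers predict the same label with the agreement one would expect if their errors were independent and spread uniformly over the remaining ILSVRC classes.

# Sketch: observed pairwise agreement vs. the agreement implied by accuracy
# alone under a hypothetical independent-and-uniform error model.
import numpy as np

def observed_agreement(preds_a, preds_b):
    """Fraction of test examples on which two models predict the same label."""
    preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(preds_a == preds_b))

def independence_baseline(acc_a, acc_b, num_classes):
    """Expected agreement if the two models erred independently, with errors
    spread uniformly over the num_classes - 1 wrong labels."""
    both_correct = acc_a * acc_b
    both_wrong_same_label = (1 - acc_a) * (1 - acc_b) / (num_classes - 1)
    return both_correct + both_wrong_same_label

# Hypothetical usage with dummy predictions (labels in {0, ..., 999} as in ILSVRC):
rng = np.random.default_rng(0)
preds_a = rng.integers(0, 1000, size=10_000)
preds_b = preds_a.copy()
flip = rng.random(10_000) < 0.2              # second model disagrees on ~20% of points
preds_b[flip] = rng.integers(0, 1000, size=flip.sum())

print(observed_agreement(preds_a, preds_b))       # ~0.80
print(independence_baseline(0.76, 0.74, 1000))    # ~0.56, far below the observed value

The gap between the observed agreement and the independence baseline is one simple way to quantify agreement beyond what the accuracy levels alone imply.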
Likewise, models created by large-scale hyperparameter search enjoy high levels
of similarity. Motivated by these empirical observations, we give a
non-asymptotic generalization bound that takes similarity into account, leading
to meaningful confidence bounds in practical settings.
Model Similarity Mitigates Test Set Overuse (arXiv:1905.12580)
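As a rough illustration of why similarity can extend the life of a test set, the sketch below contrasts a textbook Hoeffding-plus-union-bound confidence radius over k models with the radius obtained when k is replaced by a smaller effective family size, as would happen if near-identical models were grouped together. The numbers are assumptions, and this is a standard multiple-comparisons calculation, not the paper's theorem.

# Sketch: uniform confidence radius over k models, and the tighter radius
# obtained when near-duplicate models are counted as one "effective" model.
import math

def hoeffding_union_radius(num_models, n, delta=0.05):
    """With probability >= 1 - delta, every model's test accuracy lies within
    this radius of its population accuracy (Hoeffding + union bound)."""
    return math.sqrt(math.log(2 * num_models / delta) / (2 * n))

n = 50_000            # ILSVRC validation set size
k = 10_000            # nominal number of models evaluated adaptively
k_effective = 100     # hypothetical count after grouping models whose
                      # predictions nearly coincide

print(hoeffding_union_radius(k, n))            # ~0.011
print(hoeffding_union_radius(k_effective, n))  # ~0.009

In this illustration similarity enters only by shrinking the effective number of models; the non-asymptotic bound announced in the abstract takes similarity into account directly.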