Abstract
A primary concern of excessive reuse of test datasets in machine learning is
that it can lead to overfitting. Multiclass classification was recently shown
to be more resistant to overfitting than binary classification. In an open
problem of COLT 2019, Feldman, Frostig, and Hardt ask to characterize how
the overfitting bias depends on the number of classes $m$, the number of
accuracy queries $k$, and the number of examples in the dataset $n$. We
resolve this problem and determine the amount of overfitting possible
in multiclass classification. We provide computationally efficient algorithms
that achieve overfitting bias of $\Theta(\max\{\sqrt{k/(mn)},\, k/n\})$,
matching the known upper bounds.
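As an illustrative aside (a sketch, not the algorithm from this paper), the binary case $m = 2$ of the phenomenon can be simulated directly: submit $k$ uniformly random labelings as accuracy queries, then combine them weighted by their measured advantage over chance. The resulting predictor exhibits overfitting bias on the order of $\sqrt{k/n}$. All variable names below are for illustration only.

```python
import numpy as np

# Sketch of the classic binary (m = 2) test-set attack: k random accuracy
# queries on n hidden labels, combined by advantage-weighted majority vote.
rng = np.random.default_rng(0)

n, k = 500, 500                       # examples, accuracy queries
y = rng.choice([-1, 1], size=n)       # hidden test labels
S = rng.choice([-1, 1], size=(k, n))  # k uniformly random candidate labelings

acc = (S == y).mean(axis=1)           # answers to the k accuracy queries
w = acc - 0.5                         # each query's advantage over chance
final = np.sign(w @ S)                # advantage-weighted majority vote
final[final == 0] = 1                 # break ties arbitrarily

bias = (final == y).mean() - 0.5      # overfitting bias of the combined labeling
print(f"observed bias: {bias:.3f} (sqrt(k/n) regime)")
```

With $k = n$ the combined labeling's accuracy lands well above chance even though every individual query is uniformly random, which is the overfitting effect the bounds above quantify.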