Abstract
Traditional statistical theory assumes that the analysis to be performed on a
given data set is selected independently of the data themselves. This
assumption breaks down when data are re-used across analyses and the analysis
to be performed at a given stage depends on the results of earlier stages. Such
dependency can arise when the same data are used by several scientific studies,
or when a single analysis consists of multiple stages.
How can we draw statistically valid conclusions when data are re-used? This
is the focus of a recent and active line of work. At a high level, these
results show that limiting the information revealed by earlier stages of
analysis controls the bias introduced in later stages by adaptivity.
Here we review some known results in this area and highlight the role of
information-theoretic concepts, notably several one-shot notions of mutual
information.
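To make the phenomenon concrete, here is a toy simulation (not from the paper; all parameters are illustrative) of a two-stage adaptive analysis: the first stage selects the coordinate with the largest sample mean, and the second stage estimates that mean on the same data versus on fresh data. Even though every coordinate has true mean zero, re-using the data yields an upward-biased estimate.

```python
import random
import statistics

random.seed(0)

def adaptive_bias_demo(n=200, d=50, trials=500):
    """Illustrative sketch: adaptively picking the coordinate with the
    largest sample mean, then re-using the SAME sample to estimate that
    mean, gives a biased estimate. All d coordinates have true mean 0."""
    reused, fresh = [], []
    for _ in range(trials):
        sample = [[random.gauss(0, 1) for _ in range(n)] for _ in range(d)]
        # Stage 1: selection depends on the data (the adaptive step).
        best = max(range(d), key=lambda j: statistics.mean(sample[j]))
        # Stage 2a: estimate on the re-used data -> biased upward.
        reused.append(statistics.mean(sample[best]))
        # Stage 2b: estimate on fresh data -> unbiased.
        fresh.append(statistics.mean([random.gauss(0, 1) for _ in range(n)]))
    return statistics.mean(reused), statistics.mean(fresh)

biased, unbiased = adaptive_bias_demo()
print(f"re-used data estimate: {biased:+.3f}")
print(f"fresh data estimate:   {unbiased:+.3f}")
```

The gap between the two estimates is the bias introduced by adaptivity; limiting how much the selection step reveals about the sample (the information-theoretic quantity the review studies) shrinks this gap.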