Abstract
The celebrated minimax principle of Yao (1977) says that for any
Boolean-valued function $f$ with finite domain, there is a distribution $\mu$
over the domain of $f$ such that computing $f$ to error $\epsilon$ against
inputs from $\mu$ is just as hard as computing $f$ to error $\epsilon$ on
worst-case inputs. Notably, however, the distribution $\mu$ depends on the
target error level $\epsilon$: the hard distribution which is tight for bounded
error might be trivial to solve to small bias, and the hard distribution which
is tight for a small bias level might be far from tight for bounded error
levels.
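For concreteness, Yao's principle for randomized query complexity can be written as the identity below (the notation is ours, chosen for illustration: $R_\epsilon(f)$ denotes the worst-case randomized query complexity with error $\epsilon$, and $D^\mu_\epsilon(f)$ the distributional complexity against inputs drawn from $\mu$):
$$R_\epsilon(f) = \max_{\mu}\, D^\mu_\epsilon(f),$$
with the maximizing $\mu$ depending, in general, on $\epsilon$.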
In this work, we introduce a new type of minimax theorem which can provide a
hard distribution $\mu$ that works for all bias levels at once. We show that
this works for randomized query complexity, randomized communication
complexity, some randomized circuit models, quantum query and communication
complexities, approximate polynomial degree, and approximate logrank. We also
prove an improved version of Impagliazzo's hardcore lemma.
Our proofs rely on two innovations over the classical approach of using von
Neumann's minimax theorem or linear programming duality. First, we use Sion's
minimax theorem to prove a minimax theorem for ratios of bilinear functions
representing the cost and score of algorithms.
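As a sketch of the shape of such a statement (notation ours, not from the abstract): if the cost $c(x,y)$ and the score $s(x,y) > 0$ are bilinear in the algorithm's mixture $x$ and the input distribution $y$, then for fixed $y$ the ratio $c(\cdot,y)/s(\cdot,y)$ is linear-fractional and hence quasiconvex in $x$, and likewise quasiconcave in $y$ for fixed $x$, so Sion's theorem yields
$$\min_{x}\max_{y}\ \frac{c(x,y)}{s(x,y)} \;=\; \max_{y}\min_{x}\ \frac{c(x,y)}{s(x,y)}$$
over convex domains, at least one of which is compact.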
Second, we introduce a new way to analyze low-bias randomized algorithms by
viewing them as "forecasting algorithms" evaluated by a proper scoring rule.
The expected score of the forecasting version of a randomized algorithm turns
out to be a more fine-grained measure of the algorithm's bias. We show
that such expected scores have many elegant mathematical properties: for
example, they can be amplified linearly instead of quadratically. We anticipate
forecasting algorithms will find use in future work in which a fine-grained
analysis of small-bias algorithms is required.
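To make the scoring-rule terminology concrete (the particular rule used in this work is not specified above; the Brier score here is a standard example of a proper scoring rule): a scoring rule assigns a score $S(p,b)$ to a forecast $p\in[0,1]$ of a bit $b\in\{0,1\}$, and it is proper if truthful forecasting maximizes expected score:
$$\mathbb{E}_{b\sim q}\big[S(q,b)\big] \;\ge\; \mathbb{E}_{b\sim q}\big[S(p,b)\big] \qquad \text{for all } p,q\in[0,1].$$
The Brier score $S(p,b) = 1-(p-b)^2$ is proper, since $\mathbb{E}_{b\sim q}[(p-b)^2] = (p-q)^2 + q(1-q)$ is minimized at $p=q$.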