@article{perdomo2020performative,
abstract = {When predictions support decisions they may influence the outcome they aim to
predict. We call such predictions performative; the prediction influences the
target. Performativity is a well-studied phenomenon in policy-making that has
so far been neglected in supervised learning. When ignored, performativity
surfaces as undesirable distribution shift, routinely addressed with
retraining.
We develop a risk minimization framework for performative prediction bringing
together concepts from statistics, game theory, and causality. A conceptual
novelty is an equilibrium notion we call performative stability. Performative
stability implies that the predictions are calibrated not against past
outcomes, but against the future outcomes that manifest from acting on the
prediction. Our main results are necessary and sufficient conditions for the
convergence of retraining to a performatively stable point of nearly minimal
loss.
In full generality, performative prediction strictly subsumes the setting
known as strategic classification. We thus also give the first sufficient
conditions for retraining to overcome strategic feedback effects.},
added-at = {2020-03-04T00:45:11.000+0100},
author = {Perdomo, Juan C. and Zrnic, Tijana and Mendler-Dünner, Celestine and Hardt, Moritz},
biburl = {https://www.bibsonomy.org/bibtex/2f8e3f719de64dea58bd124ffc4da8af8/kirk86},
description = {[2002.06673] Performative Prediction},
interhash = {eb3a28c95a7c26ad5c828ea2ebb69173},
intrahash = {f8e3f719de64dea58bd124ffc4da8af8},
keywords = {optimization readings uncertainty},
  note = {arXiv:2002.06673. Comment: 31 pages, 4 figures},
timestamp = {2020-03-04T00:45:11.000+0100},
title = {Performative Prediction},
url = {http://arxiv.org/abs/2002.06673},
year = 2020
}