Article

Inherent Trade-Offs in the Fair Determination of Risk Scores

Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan.
(2016). arXiv:1609.05807. Comment: To appear in Proceedings of Innovations in Theoretical Computer Science (ITCS), 2017.

Abstract

Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
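The three conditions the abstract refers to are commonly summarized as calibration within groups and balance for the positive and negative classes. The sketch below uses our own notation (risk score R in [0,1], outcome Y in {0,1}, group G), so the details may differ from the paper's exact formalization.

  % (A) Calibration within groups: among members of group g assigned score v,
  %     a v fraction belong to the positive class.
  \Pr[\, Y = 1 \mid R = v,\ G = g \,] = v \qquad \text{for all } v,\ g

  % (B) Balance for the negative class: negative-class members receive the
  %     same average score in both groups.
  \mathbb{E}[\, R \mid Y = 0,\ G = 1 \,] = \mathbb{E}[\, R \mid Y = 0,\ G = 2 \,]

  % (C) Balance for the positive class: positive-class members receive the
  %     same average score in both groups.
  \mathbb{E}[\, R \mid Y = 1,\ G = 1 \,] = \mathbb{E}[\, R \mid Y = 1,\ G = 2 \,]

  % Impossibility (informal statement of the result): (A)-(C) can hold
  % simultaneously only in the constrained special cases of perfect
  % prediction (R takes values in \{0,1\}) or equal base rates
  % (\Pr[Y = 1 \mid G = 1] = \Pr[Y = 1 \mid G = 2]).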
