Abstract
Probabilistic classifiers output a probability distribution over the target
classes rather than just a class prediction. Besides providing a clear separation of
prediction and decision making, the main advantage of probabilistic models is
their ability to represent uncertainty about predictions. In safety-critical
applications, it is pivotal for a model to possess an adequate sense of
uncertainty, which for probabilistic classifiers translates into outputting
probability distributions that are consistent with the empirical frequencies
observed from realized outcomes. A classifier with such a property is called
calibrated. In this work, we develop a general theoretical calibration
evaluation framework grounded in probability theory, and point out subtleties
present in model calibration evaluation that lead to refined interpretations of
existing evaluation techniques. Lastly, we propose new ways to quantify and
visualize miscalibration in probabilistic classification, including novel
multidimensional reliability diagrams.
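
For reference, one common formalization of the calibration property described
above, stated in notation of our own choosing (the abstract itself fixes no
symbols): a probabilistic classifier g maps an input X to a predicted
distribution g(X) over classes {1, ..., K}, and is calibrated if the realized
outcomes Y are distributed according to the prediction, conditionally on the
prediction:

\[
  \mathbb{P}\bigl(Y = y \mid g(X)\bigr) = g(X)_y
  \quad \text{for all } y \in \{1, \dots, K\}, \text{ almost surely.}
\]

In words: among all inputs that receive the same predicted distribution, the
empirical class frequencies should match that distribution.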
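
To make the notion of a reliability diagram concrete, below is a minimal
sketch of the classical one-dimensional version, which bins predictions by
confidence and compares mean confidence with empirical accuracy per bin. The
multidimensional diagrams proposed in this work generalize this idea and are
not reproduced here; all names in the sketch (reliability_diagram,
confidences, correct) are hypothetical.

    import numpy as np
    import matplotlib.pyplot as plt

    def reliability_diagram(confidences, correct, n_bins=10):
        """Classical 1-D reliability diagram: bin predictions by
        confidence, then compare each bin's mean confidence with
        its empirical accuracy."""
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        # Assign each prediction to a confidence bin (clip so that
        # confidence 1.0 falls into the last bin).
        idx = np.clip(np.digitize(confidences, edges) - 1, 0, n_bins - 1)
        mean_conf, accuracy = [], []
        for b in range(n_bins):
            mask = idx == b
            if mask.any():
                mean_conf.append(confidences[mask].mean())
                accuracy.append(correct[mask].mean())
        # A calibrated model lies on the diagonal.
        plt.plot([0, 1], [0, 1], "k--", label="perfect calibration")
        plt.plot(mean_conf, accuracy, "o-", label="model")
        plt.xlabel("mean predicted confidence")
        plt.ylabel("empirical accuracy")
        plt.legend()
        plt.show()

    # Hypothetical usage: probs is an (n, K) array of predicted class
    # probabilities, labels an (n,) array of true class indices.
    # confidences = probs.max(axis=1)
    # correct = (probs.argmax(axis=1) == labels).astype(float)
    # reliability_diagram(confidences, correct)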