Inproceedings

On Calibration of Modern Neural Networks

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger.
Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1321-1330. PMLR, (2017)

Abstract

Confidence calibration – the problem of predicting probability estimates representative of the true correctness likelihood – is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling – a single-parameter variant of Platt Scaling – is surprisingly effective at calibrating predictions.
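Since the abstract highlights temperature scaling, a minimal sketch of the idea may help: a single scalar T > 0 is fit on held-out validation logits by minimizing the negative log-likelihood, and calibrated probabilities are then softmax(logits / T). This is an illustrative NumPy/SciPy sketch, not the authors' reference implementation; the function names (nll, fit_temperature) and the search bounds are assumptions made here for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    """Negative log-likelihood of labels under softmax(logits / T)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(val_logits, val_labels):
    """Fit the single temperature parameter on a held-out validation set."""
    # Bounds (0.05, 10.0) are an assumed search range for this sketch.
    res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded",
                          args=(val_logits, val_labels))
    return res.x

# Hypothetical usage: T = fit_temperature(val_logits, val_labels),
# then divide test-time logits by T before the softmax.
```

Because dividing all logits by the same positive scalar preserves their ordering, temperature scaling leaves the predicted class (and hence accuracy) unchanged; it only rescales the confidence estimates.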
