Article

'In-Between' Uncertainty in Bayesian Neural Networks

(2019). arXiv:1906.11537. Presented at the ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning.

Abstract

We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks. In particular, MFVI fails to give calibrated uncertainty estimates in between separated regions of observations. This can lead to catastrophically overconfident predictions when testing on out-of-distribution data. Avoiding such overconfidence is critical for active learning, Bayesian optimisation and out-of-distribution robustness. We instead find that a classical technique, the linearised Laplace approximation, can handle 'in-between' uncertainty much better for small network architectures.

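The abstract contrasts MFVI with the linearised Laplace approximation. Below is a minimal, illustrative sketch (not the authors' code) of a linearised Laplace predictive for a one-hidden-layer regression network trained on two separated clusters of observations, using a Gauss-Newton approximation to the Hessian. All names, sizes and hyperparameters (H, SIGMA, PRIOR_PREC, the toy sine data) are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
H, SIGMA, PRIOR_PREC = 10, 0.2, 1.0  # hidden units, noise std, prior precision (illustrative)

def unpack(theta):
    return theta[:H], theta[H:2*H], theta[2*H:3*H], theta[3*H]

def f(theta, x):
    # Scalar output of a one-hidden-layer tanh network for scalar input x.
    w1, b1, w2, b2 = unpack(theta)
    return w2 @ np.tanh(w1 * x + b1) + b2

def jac(theta, x):
    # Analytic Jacobian of f with respect to theta (same layout as theta).
    w1, b1, w2, b2 = unpack(theta)
    a = np.tanh(w1 * x + b1)
    da = 1.0 - a ** 2
    return np.concatenate([w2 * da * x, w2 * da, a, [1.0]])

# Two separated clusters of observations; the 'in-between' region is x in (-0.5, 0.5).
x_train = np.concatenate([rng.uniform(-1.0, -0.5, 20), rng.uniform(0.5, 1.0, 20)])
y_train = np.sin(4.0 * x_train) + SIGMA * rng.standard_normal(x_train.shape)

# MAP estimate: gradient descent on the negative log posterior
# (Gaussian likelihood with std SIGMA, Gaussian prior with precision PRIOR_PREC).
theta = 0.1 * rng.standard_normal(3 * H + 1)
for _ in range(20000):
    grad = PRIOR_PREC * theta
    for x, y in zip(x_train, y_train):
        grad += (f(theta, x) - y) / SIGMA ** 2 * jac(theta, x)
    theta -= 1e-4 * grad

# Linearised Laplace: Gauss-Newton approximation to the Hessian of the log posterior.
J = np.stack([jac(theta, x) for x in x_train])                 # (N, P)
hessian = J.T @ J / SIGMA ** 2 + PRIOR_PREC * np.eye(theta.size)
cov = np.linalg.inv(hessian)                                   # Laplace posterior covariance

# Predictive mean f(theta_MAP, x*) and variance J* cov J*^T + SIGMA^2 at test inputs.
for x_star in np.linspace(-1.5, 1.5, 7):
    j = jac(theta, x_star)
    std = np.sqrt(j @ cov @ j + SIGMA ** 2)
    print(f"x*={x_star:+.2f}  mean={f(theta, x_star):+.3f}  std={std:.3f}")
```

Inspecting the printed predictive standard deviation in the gap between the two clusters is a simple way to probe the 'in-between' uncertainty behaviour the abstract refers to; on this toy setup the linearised Laplace predictive would be expected to report larger uncertainty there than at the training inputs.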