'In-Between' Uncertainty in Bayesian Neural Networks
A. Foong, Y. Li, J. Hernández-Lobato, and R. Turner (2019). arXiv:1906.11537. Presented at the ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning.
Abstract
We describe a limitation in the expressiveness of the predictive uncertainty
estimate given by mean-field variational inference (MFVI), a popular
approximate inference method for Bayesian neural networks. In particular, MFVI
fails to give calibrated uncertainty estimates in between separated regions of
observations. This can lead to catastrophically overconfident predictions when
testing on out-of-distribution data. Avoiding such overconfidence is critical
for active learning, Bayesian optimisation and out-of-distribution robustness.
We instead find that a classical technique, the linearised Laplace
approximation, can handle 'in-between' uncertainty much better for small
network architectures.
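The 'in-between' effect described above can be sketched without the paper's code: at the linearisation point, the linearised Laplace predictive reduces to Bayesian linear regression in the network's Jacobian features. The sketch below uses fixed RBF features as a stand-in for that Jacobian (the centres, width, and precisions `alpha`/`beta` are illustrative assumptions, not values from the paper) and shows predictive variance rising in the gap between two separated clusters of observations.

```python
import numpy as np

# Illustrative sketch (not the paper's method): Bayesian linear regression
# in fixed RBF features, the model that linearised Laplace reduces to at
# the linearisation point. Shows 'in-between' uncertainty: predictive
# variance is larger in the gap between two separated data regions.

rng = np.random.default_rng(0)

# Two separated regions of observations, with a gap around x = 0.
x_left = rng.uniform(-2.0, -1.0, 20)
x_right = rng.uniform(1.0, 2.0, 20)
x_train = np.concatenate([x_left, x_right])
y_train = np.sin(x_train) + 0.05 * rng.normal(size=x_train.shape)

centres = np.linspace(-3.0, 3.0, 30)   # assumed RBF centres
width = 0.5                            # assumed RBF width

def features(x):
    """RBF feature map phi(x), shape (len(x), len(centres))."""
    return np.exp(-0.5 * ((x[:, None] - centres[None, :]) / width) ** 2)

alpha, beta = 1.0, 400.0               # assumed prior / noise precision
Phi = features(x_train)

# Gaussian posterior over weights: Sigma = (alpha I + beta Phi^T Phi)^{-1}
Sigma = np.linalg.inv(alpha * np.eye(len(centres)) + beta * Phi.T @ Phi)

def predictive_var(x):
    """Predictive variance phi^T Sigma phi + 1/beta at inputs x."""
    phi = features(x)
    return np.einsum('nd,de,ne->n', phi, Sigma, phi) + 1.0 / beta

gap_var = predictive_var(np.array([0.0]))[0]    # between the clusters
near_var = predictive_var(np.array([1.5]))[0]   # inside a data cluster
print(gap_var > near_var)
```

A well-calibrated posterior inflates `gap_var` relative to `near_var`; the paper's observation is that mean-field variational inference tends to miss exactly this inflation, while linearised Laplace (which behaves like the exact linear-model posterior above) recovers it for small networks.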
@article{foong2019inbetween,
abstract = {We describe a limitation in the expressiveness of the predictive uncertainty
estimate given by mean-field variational inference (MFVI), a popular
approximate inference method for Bayesian neural networks. In particular, MFVI
fails to give calibrated uncertainty estimates in between separated regions of
observations. This can lead to catastrophically overconfident predictions when
testing on out-of-distribution data. Avoiding such overconfidence is critical
for active learning, Bayesian optimisation and out-of-distribution robustness.
We instead find that a classical technique, the linearised Laplace
approximation, can handle 'in-between' uncertainty much better for small
network architectures.},
added-at = {2020-02-11T15:55:56.000+0100},
author = {Foong, Andrew Y. K. and Li, Yingzhen and Hernández-Lobato, José Miguel and Turner, Richard E.},
biburl = {https://www.bibsonomy.org/bibtex/28685bb90f08b5967a919c5d8b6de36c9/kirk86},
description = {[1906.11537] 'In-Between' Uncertainty in Bayesian Neural Networks},
interhash = {7367e23a9441546f0974e5da5027ad11},
intrahash = {8685bb90f08b5967a919c5d8b6de36c9},
keywords = {bayesian readings uncertainty},
  note = {cite arxiv:1906.11537. Comment: Presented at the ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning},
timestamp = {2020-02-11T15:55:56.000+0100},
title = {'In-Between' Uncertainty in Bayesian Neural Networks},
url = {http://arxiv.org/abs/1906.11537},
year = 2019
}