Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
F. Gustafsson, M. Danelljan, and T. Schön (2019). arXiv:1906.01620. Comment: Code is available at https://github.com/fregu856/evaluating_bdl.
Abstract
While Deep Neural Networks (DNNs) have become the go-to approach in computer
vision, the vast majority of these models fail to properly capture the
uncertainty inherent in their predictions. Estimating this predictive
uncertainty can be crucial, for instance in automotive applications. In
Bayesian deep learning, predictive uncertainty is often decomposed into the
distinct types of aleatoric and epistemic uncertainty. The former can be
estimated by letting a DNN output the parameters of a probability distribution.
Epistemic uncertainty estimation is a more challenging problem, and while
different scalable methods recently have emerged, no comprehensive comparison
has been performed in a real-world setting. We therefore accept this task and
propose an evaluation framework for predictive uncertainty estimation that is
specifically designed to test the robustness required in real-world computer
vision applications. Using the proposed framework, we perform an extensive
comparison of the popular ensembling and MC-dropout methods on the tasks of
depth completion and street-scene semantic segmentation. Our comparison
suggests that ensembling consistently provides more reliable uncertainty
estimates. Code is available at https://github.com/fregu856/evaluating_bdl.
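The decomposition described above can be made concrete with a small sketch. This is not the authors' code (their implementation is at the linked repository); it is a minimal illustration, assuming each of M ensemble members (or M MC-dropout forward passes) predicts a Gaussian mean and variance for a single input, of how total predictive uncertainty splits into an aleatoric term (the average predicted noise) and an epistemic term (the disagreement between members):

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Split predictive uncertainty for one input into aleatoric and
    epistemic parts, given M per-member Gaussian predictions.

    means, variances: arrays of shape (M,) -- the predicted mean and
    variance from each of M ensemble members or MC-dropout samples.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    aleatoric = variances.mean()   # E_i[sigma_i^2]: average predicted noise
    epistemic = means.var()        # Var_i[mu_i]: spread of member means
    return aleatoric, epistemic, aleatoric + epistemic

# Toy usage: five members agree on the noise level (0.5) but disagree
# slightly on the mean, so the epistemic term is small but nonzero.
a, e, t = decompose_uncertainty([1.0, 1.2, 0.8, 1.1, 0.9], [0.5] * 5)
```

Under this view, MC-dropout and ensembling differ only in where the M samples come from: stochastic forward passes of one network versus independently trained networks.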
@article{gustafsson2019evaluating,
abstract = {While Deep Neural Networks (DNNs) have become the go-to approach in computer
vision, the vast majority of these models fail to properly capture the
uncertainty inherent in their predictions. Estimating this predictive
uncertainty can be crucial, for instance in automotive applications. In
Bayesian deep learning, predictive uncertainty is often decomposed into the
distinct types of aleatoric and epistemic uncertainty. The former can be
estimated by letting a DNN output the parameters of a probability distribution.
Epistemic uncertainty estimation is a more challenging problem, and while
different scalable methods recently have emerged, no comprehensive comparison
has been performed in a real-world setting. We therefore accept this task and
propose an evaluation framework for predictive uncertainty estimation that is
specifically designed to test the robustness required in real-world computer
vision applications. Using the proposed framework, we perform an extensive
comparison of the popular ensembling and MC-dropout methods on the tasks of
depth completion and street-scene semantic segmentation. Our comparison
suggests that ensembling consistently provides more reliable uncertainty
estimates. Code is available at https://github.com/fregu856/evaluating_bdl.},
author = {Gustafsson, Fredrik K. and Danelljan, Martin and Schön, Thomas B.},
keywords = {bayesian deep-learning uncertainty},
note = {arXiv:1906.01620. Comment: Code is available at https://github.com/fregu856/evaluating_bdl},
title = {Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision},
url = {http://arxiv.org/abs/1906.01620},
year = 2019
}