
Bayesian Variational Autoencoders for Unsupervised Out-of-Distribution Detection

(2019). arXiv:1912.05651. Comment: 18 pages, extended version with supplementary material.

Abstract

Despite their successes, deep neural networks still make unreliable predictions when faced with test data drawn from a distribution different from that of the training data, constituting a major problem for AI safety. While this has motivated a recent surge of interest in developing methods to detect such out-of-distribution (OoD) inputs, a robust solution is still lacking. We propose a new probabilistic, unsupervised approach to this problem based on a Bayesian variational autoencoder model, which estimates a full posterior distribution over the decoder parameters using stochastic gradient Markov chain Monte Carlo, instead of fitting a point estimate. We describe how information-theoretic measures based on this posterior can then be used to detect OoD data both in the input space and in the model's latent space. The effectiveness of our approach is demonstrated empirically.
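The abstract does not specify the scoring rule, but the core idea, using disagreement across posterior samples of the decoder as an OoD signal, can be sketched with a toy example. Everything below is an illustrative assumption, not the paper's implementation: the "decoder posterior" is mocked as a set of sampled Gaussian likelihood parameters (standing in for SG-MCMC samples), and the information-theoretic measure is approximated by the variance of per-sample log-likelihoods.

```python
import numpy as np

rng = np.random.default_rng(0)


def ood_score(log_liks):
    """Disagreement-based OoD score for one input.

    log_liks: shape (K,) array of log p(x | theta_k) under K posterior
    samples of the decoder parameters. In-distribution inputs tend to
    receive consistent likelihoods across posterior samples, while OoD
    inputs produce high disagreement, so a larger variance suggests OoD.
    """
    return np.var(log_liks)


def log_lik(x, mu, sigma=1.0):
    # Gaussian log-density, standing in for the decoder likelihood.
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)


# Hypothetical posterior over decoder parameters: K sampled means
# concentrated near the training data (here, around 0).
K = 50
posterior_means = rng.normal(loc=0.0, scale=0.1, size=K)

x_in = 0.05   # input close to the training distribution
x_ood = 5.0   # input far from the training distribution

score_in = ood_score(np.array([log_lik(x_in, m) for m in posterior_means]))
score_ood = ood_score(np.array([log_lik(x_ood, m) for m in posterior_means]))
# The OoD input yields a much larger disagreement score than the
# in-distribution input.
```

The same scoring idea applies in latent space by evaluating disagreement for an encoded input rather than a raw one; the variance here is only one simple proxy for the information-theoretic measures the paper refers to.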
