Abstract
Deep learning has been at the foundation of large improvements in image
classification. To improve the robustness of predictions, Bayesian
approximations have been used to learn parameters in deep neural networks. We
follow an alternative approach, by using Gaussian processes as building blocks
for Bayesian deep learning models, which has recently become viable due to
advances in inference for convolutional and deep structure. We investigate deep
convolutional Gaussian processes, and identify a problem that holds back
current performance. To remedy the issue, we introduce a translation
insensitive convolutional kernel, which removes the restriction of requiring
identical outputs for identical patch inputs. We show empirically that this
convolutional kernel improves performance in both shallow and deep models. On
MNIST, FASHION-MNIST and CIFAR-10 we improve over previous GP models in terms
of accuracy, while also obtaining better-calibrated predictive probabilities
than comparable DNN models.
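The core idea can be illustrated with a minimal NumPy sketch. This is not the paper's exact kernel, only an assumed simplification: a standard convolutional kernel averages a base kernel over all patch pairs, so identical patches at different locations contribute identically; the translation-insensitive variant additionally weights each patch pair by a kernel over patch locations, relaxing that restriction. All function names (`rbf`, `extract_patches`, `conv_kernel`, `ti_conv_kernel`) and parameter values are hypothetical.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential base kernel between two flattened vectors.
    d2 = np.sum((np.asarray(a) - np.asarray(b)) ** 2)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def extract_patches(img, w=3):
    # All w x w patches of a square image, with their (row, col) locations.
    h = img.shape[0]
    patches, locs = [], []
    for i in range(h - w + 1):
        for j in range(h - w + 1):
            patches.append(img[i:i + w, j:j + w].ravel())
            locs.append((i, j))
    return np.array(patches), np.array(locs, dtype=float)

def conv_kernel(x1, x2, w=3):
    # Standard convolutional kernel: average the base kernel over all
    # patch pairs. Identical patches always give identical contributions,
    # regardless of where they occur in the image.
    p1, _ = extract_patches(x1, w)
    p2, _ = extract_patches(x2, w)
    return np.mean([[rbf(a, b) for b in p2] for a in p1])

def ti_conv_kernel(x1, x2, w=3, loc_lengthscale=2.0):
    # Translation-insensitive variant (sketch): weight each patch-pair
    # response by a kernel over patch locations, so matching patches at
    # distant positions need not produce identical outputs.
    p1, l1 = extract_patches(x1, w)
    p2, l2 = extract_patches(x2, w)
    total = 0.0
    for a, la in zip(p1, l1):
        for b, lb in zip(p2, l2):
            total += rbf(a, b) * rbf(la, lb, loc_lengthscale)
    return total / (len(p1) * len(p2))
```

Because each patch-pair term is multiplied by a location weight no greater than one, the translation-insensitive kernel downweights matches between spatially distant patches while remaining a valid (positive semi-definite) product-of-kernels construction.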