Abstract
Spike-and-Slab Deep Learning (SS-DL) is a fully Bayesian alternative to
Dropout for improving the generalizability of deep ReLU networks. This new type of
regularization enables provable recovery of smooth input-output maps with
unknown levels of smoothness. Indeed, we show that the posterior distribution
concentrates at the near minimax rate for $\alpha$-Hölder smooth maps,
performing as well as if we knew the smoothness level $\alpha$ ahead of time.
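For concreteness, a brief note on the rate (a standard fact about Hölder classes; the abstract itself does not spell it out, and the exact logarithmic exponent $\delta$ below is illustrative): for $\alpha$-Hölder smooth maps on $[0,1]^p$, the minimax $L_2$ estimation rate is $n^{-\alpha/(2\alpha+p)}$, and "near minimax" means posterior concentration at this rate up to a logarithmic factor,
$$\epsilon_n = n^{-\alpha/(2\alpha+p)} \log^{\delta}(n) \quad \text{for some } \delta > 0,$$
where $n$ is the sample size and $p$ the input dimension.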
Our result sheds light on architecture design for deep neural networks, namely
the choice of depth, width, and sparsity level. For these attributes to be
optimal, they typically must be tuned to the unknown smoothness level; the fully
Bayes construction obviates this constraint. As an aside, we show that SS-DL
does not overfit in the sense that the posterior concentrates on smaller
networks with fewer (up to the optimal number of) nodes and links. Our results
provide new theoretical justifications for deep ReLU networks from a Bayesian
point of view.
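To make the prior concrete, here is a minimal illustrative sketch of drawing a sparse deep ReLU network whose weights follow a spike-and-slab prior, i.e., each weight is exactly zero with probability $1-\theta$ (the spike) and Gaussian otherwise (the slab). This is not the authors' exact construction (the paper also places priors over depth, width, and sparsity level), and the function and hyperparameter names below (e.g., theta, slab_scale) are hypothetical:

import numpy as np

def sample_spike_and_slab(shape, theta=0.2, slab_scale=1.0, rng=None):
    # Each entry is 0 with probability 1 - theta (the "spike")
    # and N(0, slab_scale^2) with probability theta (the "slab").
    rng = rng if rng is not None else np.random.default_rng()
    gamma = rng.random(shape) < theta            # inclusion indicators
    slab = rng.normal(0.0, slab_scale, shape)    # slab draws
    return np.where(gamma, slab, 0.0)

def sample_sparse_relu_network(widths, theta=0.2, rng=None):
    # Draw all weights and biases of a deep ReLU network from the
    # spike-and-slab prior; widths = [input_dim, w_1, ..., w_L, output_dim].
    rng = rng if rng is not None else np.random.default_rng()
    params = [(sample_spike_and_slab((m, n), theta, rng=rng),
               sample_spike_and_slab((n,), theta, rng=rng))
              for m, n in zip(widths[:-1], widths[1:])]

    def f(x):
        h = x
        for i, (W, b) in enumerate(params):
            h = h @ W + b
            if i < len(params) - 1:              # ReLU on hidden layers only
                h = np.maximum(h, 0.0)
        return h

    return f

# One prior draw: input dimension 4, two hidden layers of width 16.
rng = np.random.default_rng(0)
net = sample_sparse_relu_network([4, 16, 16, 1], theta=0.2, rng=rng)
x = rng.normal(size=(8, 4))
print(net(x).shape)                              # (8, 1)

Under such a prior, draws contain many exactly-zero weights, which is the sense in which the posterior can favor smaller networks with fewer active nodes and links.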