Generalization in Deep Learning
Kenji Kawaguchi, Leslie Pack Kaelbling, and Yoshua Bengio.
(2017). arXiv:1710.05468. Comment: To appear in Mathematics of Deep Learning, Cambridge University Press. All previous results remain unchanged.

This paper provides non-vacuous and numerically tight generalization guarantees for deep learning, as well as theoretical insights into why and how deep learning can generalize well despite its large capacity, complexity, possible algorithmic instability, non-robustness, and sharp minima, responding to an open question in the literature. We also propose new open problems and discuss the limitations of our results.