Abstract
This paper provides non-vacuous and numerically tight generalization
guarantees for deep learning, as well as theoretical insights into why and how
deep learning can generalize well despite its large capacity, complexity,
possible algorithmic instability, non-robustness, and sharp minima, responding
to an open question in the literature. We also propose new open problems and
discuss the limitations of our results.