Lightlike Neuromanifolds, Occam's Razor and Deep Learning

Ke Sun and Frank Nielsen.
(2019). arXiv:1905.11027. Comment: Submitted to NeurIPS 2019.

Abstract

Why do deep neural networks generalize despite their very high dimensional parameter spaces? We take an information-theoretic approach. We find that the dimensionality of the parameter space can be studied through singular semi-Riemannian geometry and is upper-bounded by the sample size. We adapt Fisher information to this singular neuromanifold. We use random matrix theory to derive a minimum description length of a deep learning model, where the spectrum of the Fisher information matrix plays a key role in improving generalization.
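The abstract hinges on the spectrum of the Fisher information matrix of an overparameterized model. Below is a minimal, hypothetical sketch (plain NumPy, not the authors' code), assuming a logistic-regression model for concreteness: the empirical Fisher is the average outer product of per-sample score vectors, so its rank, and hence the number of nonzero eigenvalues, is at most the sample size.

    # Minimal sketch (illustrative only, not the paper's implementation):
    # empirical Fisher information of a logistic-regression model with more
    # parameters (d) than samples (n), showing its rank is bounded by n.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 20, 100                    # n samples, d parameters (d > n)
    X = rng.normal(size=(n, d))
    y = rng.integers(0, 2, size=n).astype(float)
    w = rng.normal(scale=0.1, size=d)

    p = 1.0 / (1.0 + np.exp(-X @ w))  # model probabilities
    scores = (y - p)[:, None] * X     # per-sample gradients of the log-likelihood
    fisher = scores.T @ scores / n    # empirical Fisher information matrix (d x d)

    eigvals = np.linalg.eigvalsh(fisher)
    print("parameters:", d, "samples:", n)
    print("numerical rank of Fisher:", int(np.sum(eigvals > 1e-10)))  # at most n

With d = 100 parameters and n = 20 samples, at most 20 eigenvalues are nonzero; the remaining directions of the metric are degenerate, which is the kind of singular (lightlike) structure the paper studies.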
