Deep Learning Works in Practice. But Does it Work in Theory?

Lê Nguyên Hoang and Rachid Guerraoui (2018). arXiv:1801.10437. Comment: 6 pages, 4 figures.

Abstract

Deep learning relies on a very specific kind of neural network: one that stacks several neural layers. In the last few years, deep learning has achieved major breakthroughs in many tasks such as image analysis, speech recognition, and natural language processing. Yet, there is no theoretical explanation of this success. In particular, it is not clear why deeper networks actually perform better. We argue that the explanation is intimately connected to a key feature of the data collected from our surrounding universe to feed the machine learning algorithms: large non-parallelizable logical depth. Roughly speaking, we conjecture that the shortest computational descriptions of the universe are algorithms with inherently large computation times, even when a large number of computers are available for parallelization. Interestingly, this conjecture, combined with the folklore conjecture in theoretical computer science that $P \neq NC$, explains the success of deep learning.
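As a rough illustration of the abstract's notion of non-parallelizable logical depth, the following Python sketch (an assumption of this summary, not taken from the paper) contrasts an inherently sequential iterated map, where step t cannot start before step t-1 finishes, with an embarrassingly parallel elementwise computation; the function names and the choice of the logistic map are illustrative only.

    # Illustrative sketch (not from the paper): "deep" sequential work vs. "shallow" parallel work.

    def iterated_map(x0, steps):
        # Each iteration of the logistic map needs the previous result, so the
        # computation has depth `steps`: extra processors cannot shorten it.
        x = x0
        for _ in range(steps):
            x = 3.9 * x * (1.0 - x)
        return x

    def elementwise_square(values):
        # Contrast: every output depends only on its own input, so this map is
        # trivially parallelizable (constant depth, in the spirit of NC).
        return [v * v for v in values]

    if __name__ == "__main__":
        print(iterated_map(0.5, 1_000_000))         # short program, long sequential run
        print(elementwise_square([0.1, 0.2, 0.3]))  # short program, shallow parallel run

The point of the contrast: both programs have short descriptions, but only the first requires a long chain of dependent steps, which is the kind of depth the paper conjectures is typical of data from our universe.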

Description

1801.10437.pdf

Links and resources

Tags

community

  • @jk_itwm
  • @dblp