
On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence

Yi Ma, Doris Tsao, and Heung-Yeung Shum (2022). arXiv:2207.04630. Comment: 24 pages, 11 figures. This updated version makes changes in language and adds a few additional references. This is the final version to be published.

Abstract

Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that sheds light on understanding deep networks within a bigger picture of Intelligence in general. We introduce two fundamental principles, Parsimony and Self-consistency, that address two fundamental questions regarding Intelligence: what to learn and how to learn, respectively. We believe the two principles are the cornerstones for the emergence of Intelligence, artificial or natural. While these two principles have rich classical roots, we argue that they can be stated anew in entirely measurable and computable ways. More specifically, the two principles lead to an effective and efficient computational framework, compressive closed-loop transcription, that unifies and explains the evolution of modern deep networks and many artificial intelligence practices. While we mainly use modeling of visual data as an example, we believe the two principles will unify understanding of broad families of autonomous intelligent systems and provide a framework for understanding the brain.
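As a rough illustration of how the principle of Parsimony is made measurable, the authors' related work on rate reduction (e.g., the MCR² objective) quantifies the quality of a representation Z of m samples, partitioned by memberships Π = {Π_j}, through a coding-rate difference. The following is a hedged sketch in that line of work's notation, not a restatement of this paper's exact formulation:

\[
R(Z, \epsilon) = \frac{1}{2}\log\det\!\Big(I + \frac{d}{m\epsilon^{2}}\, Z Z^{\top}\Big),
\qquad
R_c(Z, \epsilon \mid \Pi) = \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2m}\,
\log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^{2}}\, Z \Pi_j Z^{\top}\Big),
\]

\[
\Delta R(Z, \Pi, \epsilon) = R(Z, \epsilon) - R_c(Z, \epsilon \mid \Pi).
\]

Under this sketch, Parsimony corresponds to maximizing the rate reduction ΔR, i.e., learning a compact yet discriminative representation. Self-consistency, in the same line of work, is enforced by a closed-loop game in which an encoder f and a decoder g compare the representations of the data X and of its transcription g(f(X)), so the learned model can check itself without external supervision.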
