On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence
Y. Ma, D. Tsao, and H. Shum (2022). arXiv:2207.04630. Comment: 24 pages, 11 figures. This updated version makes changes in language and adds a few additional references. This is the final version to be published.
Abstract
Ten years into the revival of deep networks and artificial intelligence, we
propose a theoretical framework that sheds light on understanding deep networks
within a bigger picture of Intelligence in general. We introduce two
fundamental principles, Parsimony and Self-consistency, that address two
fundamental questions regarding Intelligence: what to learn and how to learn,
respectively. We believe the two principles are the cornerstones for the
emergence of Intelligence, artificial or natural. While these two principles
have rich classical roots, we argue that they can be stated anew in entirely
measurable and computable ways. More specifically, the two principles lead to
an effective and efficient computational framework, compressive closed-loop
transcription, that unifies and explains the evolution of modern deep networks
and many artificial intelligence practices. While we mainly use modeling of
visual data as an example, we believe the two principles will unify
understanding of broad families of autonomous intelligent systems and provide a
framework for understanding the brain.
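To make the abstract's claim that the principles are "measurable and computable" concrete: the quantity the authors build on is the coding rate of a feature set, and the rate reduction between the whole set and its parts, which quantifies parsimony; self-consistency is then checked by re-encoding decoded data and comparing in feature space. Below is a minimal numpy sketch of both ideas, following the rate-reduction formulation from the authors' companion work. The function names, the default distortion eps, and the closed_loop_gap discrepancy are illustrative assumptions, not the paper's implementation.

import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z, eps) = 1/2 logdet(I + d/(n eps^2) Z Z^T): roughly the number
    # of bits needed to encode the n columns of Z (d-dim features) up
    # to distortion eps; larger means the features fill more of the space.
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * (Z @ Z.T))
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    # Delta R = R(Z) - sum_j (n_j / n) R(Z_j): the coding-rate gain from
    # encoding each class separately. Maximizing it expands the whole
    # representation while compressing each class -- one concrete
    # reading of the parsimony principle.
    n = Z.shape[1]
    r_parts = sum(
        (np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
        for c in np.unique(labels)
    )
    return coding_rate(Z, eps) - r_parts

def closed_loop_gap(f, g, X, eps=0.5):
    # Self-consistency via the closed loop x -> z = f(x) -> g(z) -> f(g(z)):
    # decoded data are compared with the originals in feature space, so
    # the encoder doubles as the discriminator. A zero gap means the
    # re-encoded features are rate-wise indistinguishable from Z.
    Z = f(X)
    Z_hat = f(g(Z))
    n = Z.shape[1]
    labels = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
    return rate_reduction(np.concatenate([Z, Z_hat], axis=1), labels, eps)

On this reading, "compressive closed-loop transcription" is a minimax game played on such rate measures between the encoder f and the decoder g; the paper develops the exact objective and its relation to modern deep networks.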
@misc{ma2022principles,
author = {Ma, Yi and Tsao, Doris and Shum, Heung-Yeung},
keywords = {machinelearning},
note = {arXiv:2207.04630. Comment: 24 pages, 11 figures. This updated version makes changes in language and adds a few additional references. This is the final version to be published},
title = {On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence},
url = {http://arxiv.org/abs/2207.04630},
year = 2022
}