Abstract
To make deliberate progress towards more intelligent and more human-like
artificial systems, we need to follow an appropriate feedback signal: we
need to be able to define and evaluate intelligence in a way that enables
comparisons between two systems, as well as comparisons with humans. Over the
past hundred years, there has been an abundance of attempts to define and
measure intelligence, across the fields of both psychology and AI. We summarize
and critically assess these definitions and evaluation approaches, while making
apparent the two historical conceptions of intelligence that have implicitly
guided them. We note that in practice, the contemporary AI community still
gravitates towards benchmarking intelligence by comparing the skill exhibited
by AIs and humans at specific tasks such as board games and video games. We
argue that solely measuring skill at any given task falls short of measuring
intelligence, because skill is heavily modulated by prior knowledge and
experience: unlimited priors or unlimited training data allow experimenters to
"buy" arbitrary levels of skills for a system, in a way that masks the system's
own generalization power. We then articulate a new formal definition of
intelligence based on Algorithmic Information Theory, describing intelligence
as skill-acquisition efficiency and highlighting the concepts of scope,
generalization difficulty, priors, and experience. Using this definition, we
propose a set of guidelines for what a general AI benchmark should look like.
Finally, we present a benchmark closely following these guidelines, the
Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors
designed to be as close as possible to innate human priors. We argue that ARC
can be used to measure a human-like form of general fluid intelligence and that
it enables fair general intelligence comparisons between AI systems and humans.
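
To make the abstract's notion of skill-acquisition efficiency concrete, the
schematic below renders it as a displayed equation. It is an illustrative
simplification, not the paper's exact formula: the symbols GD_T, P_T, and E_T
are shorthand of our own for the generalization difficulty, priors, and
experience named above, and the averaging over a scope of tasks is likewise a
simplified stand-in for the full definition given in the paper.

% Minimal compilable sketch; requires amsmath for \operatorname*.
% Illustrative only -- the paper's formal definition has additional terms.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% GD_T : generalization difficulty of task T (hypothetical shorthand)
% P_T  : prior knowledge the system brings to task T
% E_T  : experience (e.g., training data) the system consumes on task T
\[
  I_{\text{system}}
  \;\approx\;
  \operatorname*{Avg}_{T \,\in\, \text{scope}}
  \left[ \frac{GD_T}{P_T + E_T} \right]
\]
\end{document}

Read this way, a system scores higher when it reaches skill on hard-to-
generalize tasks while consuming few priors and little experience; pouring in
unlimited priors or training data inflates the denominator, which is exactly
why skill alone, as argued above, cannot measure intelligence.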