Abstract
When Newell introduced the concept of the knowledge level as a
useful level of description for computer systems, he focused on
the representation of knowledge. This paper applies the knowledge
level notion to the problem of knowledge acquisition. Two
interesting issues arise. First, some existing machine learning
programs appear to be completely static when viewed at the
knowledge level. These programs improve their performance without
changing their "knowledge". Second, the behaviour of some other
machine learning programs cannot be predicted or described at the
knowledge level. These programs take unjustified inductive leaps.
Programs of the first kind are called symbol level learning (SLL)
programs; those of the second, non-deductive knowledge level
learning (NKLL) programs. The paper analyzes both of these classes of
learning programs and speculates on the possibility of developing
coherent theories of each. A theory of symbol level learning is
sketched, and some reasons are presented for believing that a
theory of NKLL will be difficult to obtain.