@misc{mcclelland2019extending,
abstract = {Language is crucial for human intelligence, but what exactly is its role? We
take language to be a part of a system for understanding and communicating
about situations. The human ability to understand and communicate about
situations emerges gradually from experience and depends on domain-general
principles of biological neural networks: connection-based learning,
distributed representation, and context-sensitive, mutual constraint
satisfaction-based processing. Current artificial language processing systems
rely on the same domain-general principles, embodied in artificial neural
networks. Indeed, recent progress in this field depends on \emph{query-based
attention}, which extends the ability of these systems to exploit context and
has contributed to remarkable breakthroughs. Nevertheless, most current models
focus exclusively on language-internal tasks, limiting their ability to perform
tasks that depend on understanding situations. These systems also lack memory
for the contents of prior situations outside of a fixed contextual span. We
describe the organization of the brain's distributed understanding system,
which includes a fast learning system that addresses the memory problem. We
sketch a framework for future models of understanding drawing equally on
cognitive neuroscience and artificial intelligence and exploiting query-based
attention. We highlight relevant current directions and consider further
developments needed to fully capture human-level language understanding in a
computational system.},
added-at = {2021-06-09T04:33:40.000+0200},
author = {McClelland, James L. and Hill, Felix and Rudolph, Maja and Baldridge, Jason and Schütze, Hinrich},
biburl = {https://www.bibsonomy.org/bibtex/2f1daa6bc15b01dd879bd0ad56cc5895d/kvdberg},
description = {Extending Machine Language Models toward Human-Level Language Understanding},
interhash = {9b75caf919091eaae1afda8cc5d811f5},
intrahash = {f1daa6bc15b01dd879bd0ad56cc5895d},
keywords = {attention cognition},
note = {cite arxiv:1912.05877},
timestamp = {2021-06-09T04:33:40.000+0200},
title = {Extending Machine Language Models toward Human-Level Language Understanding},
url = {http://arxiv.org/abs/1912.05877},
year = 2019
}