Abstract
Similarities and differences between language and music processing
are examined from an evolutionary and a cognitive perspective. Language
and music cannot be considered single entities; they need to be decomposed
into different component operations or levels of processing. The
central question concerns one of the most important claims of the
generative grammar theory, that is, the specificity of language processing:
do the computations performed to process language rely on specific
linguistic processes or do they rely on general cognitive principles?
Evidence from brain imaging is reviewed, noting that the
field currently needs a meta-analysis of the available results
to evaluate this claim precisely. A series of experiments, mainly
using the event-related brain potentials method, were conducted to
compare different levels of processing in language and music. Overall,
results favor language specificity when certain aspects of semantic
processing in language are compared with certain aspects of melodic
and harmonic processing in music. By contrast, results support the
view that general cognitive principles are involved when aspects
of syntactic processing in language are compared with aspects of
harmonic processing in music. Moreover, analysis of temporal
structure revealed similar effects in language and music. These tentative
conclusions await support from further brain imaging results to shed
additional light on the spatiotemporal dynamics of the brain structure-function
relationship.