Abstract
Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding of overt speech has progressed, decoding of imagined speech has met with limited success, mainly because the associated neural signals are weak and variable compared to those of overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography
datasets from 13 patients, with electrodes implanted for epilepsy
evaluation, who performed overt and imagined speech production
tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their ability to discriminate speech items in articulatory, phonetic, and
vocalic representation spaces. While high-frequency activity
provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency coupling contributed to imagined speech decoding, particularly in phonetic and vocalic, i.e., perceptual, spaces. These findings show that low-frequency
power and cross-frequency dynamics contain key information for
imagined speech decoding.