
Released

Preprint

Dissociating endogenous and exogenous delta activity during natural speech comprehension

MPS-Authors

Meyer, Lars
Max Planck Research Group Language Cycles, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Lo, Chiawen
Max Planck Research Group Language Cycles, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Fulltext (public)

Chalas_pre.pdf
(Preprint), 4MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Chalas, N., Meyer, L., Lo, C., Park, H., Kluger, D. S., Abbasi, O., et al. (2024). Dissociating endogenous and exogenous delta activity during natural speech comprehension. bioRxiv. doi:10.1101/2024.02.01.578181.


Cite as: https://hdl.handle.net/21.11116/0000-000E-59CC-9
Abstract
Decoding human speech requires the brain to segment the incoming acoustic signal into meaningful linguistic units, ranging from syllables and words to phrases. Integrating these linguistic constituents into a coherent percept lies at the root of compositional meaning and hence understanding. Prosodic cues, such as pauses, are important cues for segmentation in natural speech, but their interplay with higher-level linguistic processing is still unknown. Here we dissociate the neural tracking of prosodic pauses from the segmentation of multi-word chunks using magnetoencephalography (MEG). We find that manipulating the regularity of pauses disrupts slow speech-brain tracking bilaterally in auditory areas (below 2 Hz) and in turn increases left-lateralized coherence of higher-frequency auditory activity at speech onsets (around 25–45 Hz). Critically, we also find that multi-word chunks—defined as short, coherent bundles of inter-word dependencies—are processed through the rhythmic fluctuations of low-frequency activity (below 2 Hz) bilaterally and independently of prosodic cues. Importantly, low-frequency alignment at chunk onsets increases the accuracy of an encoding model in bilateral auditory and frontal areas, while controlling for the effect of acoustics. Our findings provide novel insights into the neural basis of speech perception, demonstrating that both acoustic features (prosodic cues) and abstract processing at the multi-word timescale are independently underpinned by low-frequency electrophysiological brain activity.
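To illustrate the kind of speech–brain coherence measure the abstract refers to, the minimal sketch below computes magnitude-squared coherence between a synthetic low-frequency "speech envelope" and a simulated signal that partially tracks it. All signals, frequencies, and parameters here are invented for demonstration; this is not the study's data or analysis pipeline.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical illustration (not the paper's pipeline): a 1.5 Hz "envelope"
# and a noisy signal that partially tracks it.
fs = 200                       # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)   # 60 s of signal

rng = np.random.default_rng(0)
envelope = np.sin(2 * np.pi * 1.5 * t)               # slow (delta-band) modulation
tracking = 0.6 * envelope + rng.standard_normal(t.size)  # tracking + noise

# Magnitude-squared coherence as a function of frequency (Welch segments of 4 s)
f, cxy = coherence(envelope, tracking, fs=fs, nperseg=fs * 4)

# Coherence peaks at the shared 1.5 Hz rhythm and stays low elsewhere
delta_coh = cxy[np.argmin(np.abs(f - 1.5))]
control_coh = cxy[np.argmin(np.abs(f - 40.0))]
print(f"coherence at 1.5 Hz: {delta_coh:.2f}, at 40 Hz: {control_coh:.2f}")
```

In the same spirit, disrupted tracking (e.g., irregular pauses) would show up as reduced coherence in the sub-2 Hz band while leaving unrelated frequency bands largely unchanged.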