Free keywords:
-
Abstract:
During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g. syllables or words) necessary for speech comprehension. Recent hypotheses have proposed that neural oscillations contribute to speech parsing but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing
using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not solely follow the acoustic properties but would also shift in time according to the participant's conscious speech percept. Our results show two dissociable
markers of neural speech tracking under endogenous control: small modulations in low-frequency
oscillations and variable latencies of high-frequency activity (specifically, in the beta and gamma bands). Whereas the changes in low-frequency neural oscillations are compatible with the encoding of pre-lexical segmentation cues, the high-frequency activity was specifically informative about an individual's conscious speech percept.