Free keywords:
Dual stream model; Language; Neurolinguistics; Speech
Abstract:
Speech perception refers to the suite of neural, computational, and cognitive operations that transform auditory input signals into representations that can make contact with internally stored information: the words in a listener’s mental lexicon. Speech perception is typically studied using single speech sounds (e.g., vowels or syllables), spoken words, or connected speech. Based on neuroimaging, lesion, and electrophysiological data, dual stream neurocognitive models of speech perception have been proposed that identify ventral stream functions (mapping from sound to meaning) and dorsal stream functions (mapping from sound to articulation). Major outstanding research questions include cerebral lateralization, the role of neuronal oscillations, and the contribution of top-down, abstract knowledge to perception.