Abstract
A central goal of research in the neurobiology of language
is to discover the neural underpinnings of concepts
such as “phoneme”, “morpheme”, “word”, “lemma”,
and “phrase”, of conditions such as “agreement”, and
of operations such as “wh-movement”, all of which are defined
in linguistics. However, a large proportion of these
concepts and operations originated as devices for
meta-level discussions about sentences, long before
scientists started asking questions about the neural
and cognitive processes that underlie the production
and comprehension of utterances. At least some of the
concepts may have no neural substance at all, even if
they have been successfully invoked in explaining the
results of psycholinguistic experiments. In the theory
of speech comprehension it is hotly debated what the
basic units of processing and representation are. The
majority view still holds that the basic units are abstract
phonemes or bundles of distinctive features, but there
is increasing support for theories that take episodes
or exemplars as the basic units. These antagonistic
theories have in common that they remain extremely vague about the details of the neural representations
and the computations that are needed for a person to
actually understand or produce a spoken utterance. If
positions and claims are supported by computational
models at all, those models virtually always
operate on manually constructed, discrete symbolic input
representations and make no claims about
neurobiological plausibility. In this poster we will present
the results of a large-scale behavioral experiment aimed
at answering the question of whether exemplars play a role
in comprehension as well as in production. Participants
were asked to shadow nonsense words of the form
/CV-CV-PV/ (Mitterer & Ernestus, 2008), where
the vowel V in the central syllable could have normal
or somewhat lengthened duration; likewise, the voiceless
plosive P that separates the second and third syllables
could have normal duration or be lengthened. Native speakers
of Dutch have several routes available for linking their
perception to the ensuing articulation. At the perception
side they may restrict processing to creating on-the-fly
exemplars without a representation in the form of
discrete units, they might create a representation in the
form of discrete phonemic units, or they might access
their mental lexicon to find the most similar word
(Roelofs, 2004). For each of these routes we construct
plausible neural computational procedures that could
be used to control the speech production process in the
shadowing task. Using end-to-end computational models
(i.e., models that take acoustic speech signals as input
and produce audible speech as output) we simulate
the chronometric data and the accuracy with which
the stimuli were shadowed, in an attempt to explain
differences between participants in terms of different
routes. We will use the results to discuss potential
discrepancies between, on the one hand, the representations
and processes implied by functional (psycho)linguistic
models of speech comprehension and production and,
on the other, what is currently known about the
neural processes that support the auditory processing of
speech signals and the production of spoken utterances.
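For concreteness, the 2 × 2 stimulus design described above (normal vs. lengthened central vowel, crossed with normal vs. lengthened voiceless plosive) can be enumerated as a minimal sketch; the condition names below are illustrative and not taken from the actual experiment materials:

```python
from itertools import product

# Hypothetical labels for the two duration manipulations applied to
# nonsense words: the central-syllable vowel and the voiceless plosive
# are each produced with either normal or somewhat lengthened duration.
VOWEL_DURATIONS = ("normal", "lengthened")
PLOSIVE_DURATIONS = ("normal", "lengthened")

def stimulus_conditions():
    """Enumerate the four duration conditions of the 2 x 2 design."""
    return [
        {"vowel": v, "plosive": p}
        for v, p in product(VOWEL_DURATIONS, PLOSIVE_DURATIONS)
    ]

print(len(stimulus_conditions()))  # 4 conditions
```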
References
Mitterer, H., & Ernestus, M. (2008). The link between speech perception and production is phonological and abstract: Evidence from the shadowing task. Cognition, 109, 168–173.
Roelofs, A. (2004). Error biases in spoken word planning and monitoring by aphasic and nonaphasic speakers: Comment on Rapp and Goldrick (2000). Psychological Review, 111(2), 561–572.