
Released

Journal Article

Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension

MPS-Authors

Kaufeld, Greta
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society;

Ravenschlag, Anna
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;


Meyer, Antje S.
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;


Martin, Andrea E.
Language and Computation in Neural Systems, MPI for Psycholinguistics, Max Planck Society;


Bosker, Hans R.
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;

Citation

Kaufeld, G., Ravenschlag, A., Meyer, A. S., Martin, A. E., & Bosker, H. R. (2020). Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 549-562. doi:10.1037/xlm0000744.


Cite as: https://hdl.handle.net/21.11116/0000-0003-C214-B
Abstract
During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of the representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue’s reliability. Moreover, we found speech rate normalization effects in participants’ gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects.