
Released

Journal Article

The role of co-speech gestures in retrieval and prediction during naturalistic multimodal narrative processing

MPS-Authors

Meyer, Lars
Max Planck Research Group Language Cycles, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Department of Phoniatrics and Pedaudiology, Münster University, Germany

Fulltext (public)

Osorio_2023.pdf (Any fulltext), 3 MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Osorio, S., Straube, B., Meyer, L., & He, Y. (2024). The role of co-speech gestures in retrieval and prediction during naturalistic multimodal narrative processing. Language, Cognition and Neuroscience, 39(3), 367-382. doi:10.1080/23273798.2023.2295499.


Cite as: https://hdl.handle.net/21.11116/0000-000E-3563-7
Abstract
During daily communication, visual cues such as gestures accompany the speech signal and facilitate semantic processing. However, how gestures impact lexical retrieval and semantic prediction, especially in a naturalistic setting, remains unclear. Here, participants watched a naturalistic multimodal narrative in which an actor narrated a story and spontaneously produced co-speech gestures. For all content words, word frequency and lexical surprisal were regressed against the EEG using temporal response functions (TRFs), which were fitted separately, additively, and interactively for words accompanied and not accompanied by gestures. Our analyses indicate a robust modulatory effect of gesture on the frequency-dependent regression N400. In addition, we observed some evidence of a modulatory effect of gesture on the surprisal-N400 effect in the single-predictor model. Our findings thus suggest that, on a neural level, the presence of co-speech gestures facilitates lexical retrieval and potentially semantic prediction during the processing of naturalistic multimodal stimuli.
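
The TRF approach described in the abstract amounts to regressing time-lagged copies of a word-level predictor (e.g., log word frequency placed as impulses at word onsets) against the continuous EEG, so that the fitted weights trace a response over time lags. The following is a minimal illustrative sketch in Python, assuming a single channel, synthetic data, and ridge regularisation; it is not the authors' actual analysis pipeline, and all variable names, the sampling rate, and the regularisation parameter are hypothetical choices for demonstration only.

import numpy as np

def lagged_design(x, n_lags):
    # Column k holds the predictor delayed by k samples, so the
    # regression weights over columns form a response across time lags.
    n = len(x)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:n - k]
    return X

def fit_trf(predictor, eeg, n_lags, alpha=1.0):
    # Closed-form ridge regression: w = (X'X + alpha*I)^{-1} X'y.
    X = lagged_design(predictor, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

# Toy demonstration with synthetic data (all values hypothetical).
rng = np.random.default_rng(0)
fs = 100                                  # assumed sampling rate in Hz
n = fs * 60                               # one minute of data
stim = np.zeros(n)
onsets = rng.choice(n - fs, size=120, replace=False)
stim[onsets] = rng.normal(size=120)       # e.g., log frequency at word onsets
true_trf = np.hanning(40)                 # invented ground-truth response shape
eeg = np.convolve(stim, true_trf)[:n] + rng.normal(scale=0.5, size=n)

trf = fit_trf(stim, eeg, n_lags=60, alpha=10.0)
print(trf.round(2))                       # recovered weights approximate true_trf

In the study itself, such models were fitted separately, additively, and interactively for gesture-accompanied and gesture-free words, allowing the frequency- and surprisal-related TRF components (the "regression N400") to be compared across the two word classes.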