  Language prediction in multimodal contexts: The contribution of iconic gestures to anticipatory sentence comprehension

Hintz, F., Strauß, A., Khoe, Y., & Holler, J. (2023). Language prediction in multimodal contexts: The contribution of iconic gestures to anticipatory sentence comprehension. OSF Preprints. doi:10.17605/OSF.IO/679TM.

Files

Hintz_etal_2023_preprint.pdf (Preprint), 1024KB
Name: Hintz_etal_2023_preprint.pdf
Description: -
OA-Status: Green
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: 2023
Copyright Info: -


Creators

Hintz, Florian (1), Author
Strauß, Antje, Author
Khoe, Yung, Author
Holler, Judith (2), Author
Affiliations:
(1) Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society, ou_792545
(2) Communication in Social Interaction, Radboud University Nijmegen, External Organizations, ou_3055481

Content

Free keywords: -
Abstract: There is a growing body of research demonstrating that during comprehension, language users predict upcoming information. Prediction has been argued to facilitate dialog in that listeners try to predict what the speaker will say next so that they can plan their own utterance early. Such behavior may enable smooth transitions between turns in conversation. In face-to-face dialog, speakers produce a multitude of visual signals, such as manual gestures, in addition to speech. Previous studies have shown that comprehenders integrate semantic information from speech and corresponding iconic gestures when these are presented simultaneously. However, in natural conversation, iconic gestures often temporally precede their corresponding speech units by substantial lags. Given these lags in gesture-speech timing and the predictive nature of language comprehension, a recent theoretical framework proposed that listeners exploit iconic gestures in the service of predicting upcoming information. The proposed study aims to test this proposal. We will record the electroencephalogram (EEG) of 80 Dutch adults while they watch videos of an actress producing discourses. The stimuli consist of an introductory and a target sentence; the latter contains a target noun. Depending on the preceding discourse, the target noun is either predictable or not. Each target noun is paired with an iconic gesture whose presentation in the video is timed such that the gesture stroke precedes the onset of the spoken target either by 520 ms (earlier condition) or by 130 ms (later condition). Analyses of event-related potentials preceding and following target onset will reveal whether and to what extent targets were pre-activated by iconic gestures. If the findings support the notion that iconic co-speech gestures contribute to predictive language comprehension, they will lend support to the recent theoretical framework of face-to-face conversation and offer one possible explanation for the smooth transitions between turns in natural dialog.

Details

Language(s): eng - English
Dates: 2023-02-23
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: No review
Identifiers: DOI: 10.17605/OSF.IO/679TM
Degree: -


Source 1

Title: OSF Preprints
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: -