
Released

Preprint

Audiovisual Perception of Lexical Stress: Beat Gestures Are Stronger Visual Cues for Lexical Stress than Visible Articulatory Cues on the Face

MPS-Authors

Bujok, Ronny
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society;

Meyer, Antje S.
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;

Bosker, Hans R.
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;
Citation

Bujok, R., Meyer, A. S., & Bosker, H. R. (2022). Audiovisual perception of lexical stress: Beat gestures are stronger visual cues for lexical stress than visible articulatory cues on the face. PsyArXiv Preprints. doi:10.31234/osf.io/y9jck


Cite as: https://hdl.handle.net/21.11116/0000-000A-8260-6
Abstract
Human communication is inherently multimodal: listeners use not only auditory speech but also visual cues to understand a talker. This is well established for the perception of speech segments (i.e., speech sounds). However, less is known about how visual information influences the perception of suprasegmental aspects of speech, such as lexical stress. This study investigated the influence of different types of visual information (e.g., facial cues and beat gestures) on the perception of lexical stress and found that beat gestures, but not facial cues, affect lexical stress perception. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts and expand our understanding of audiovisual speech perception and integration.