
Released

Journal Article

Audiovisual perception of lexical stress: Beat gestures and articulatory cues

MPS-Authors

Bujok,  Ronny
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society;


Meyer,  Antje S.
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society;


Bosker,  Hans R.
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;

External Resource
No external resources are shared
Fulltext (public)

Bujok et al._2024_L&S.docx
(Postprint), 2MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Bujok, R., Meyer, A. S., & Bosker, H. R. (in press). Audiovisual perception of lexical stress: Beat gestures and articulatory cues. Language and Speech.


Cite as: https://hdl.handle.net/21.11116/0000-000F-4A7C-4
Abstract
Human communication is inherently multimodal: not only auditory speech but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds); much less is known about the influence of visual information on the perception of suprasegmental aspects of speech such as lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or the second syllable. Results showed that participants successfully used visual articulatory cues to lexical stress in muted videos. In audiovisual conditions, however, we found no effect of visual articulatory cues. In contrast, the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.