Conference Paper

Sensory modality of input influences encoding of motion events in speech but not co-speech gestures

MPS-Authors

Mamus, Ezgi
Multimodal Language and Cognition, Radboud University Nijmegen, External Organizations;
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society;


Ozyurek, Asli
Center for Language Studies, External Organizations;
Research Associates, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;
Multimodal Language and Cognition, Radboud University Nijmegen, External Organizations;

Citation

Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 376-382). Vienna: Cognitive Science Society.


Cite as: http://hdl.handle.net/21.11116/0000-0008-7D5B-7
Abstract
Visual and auditory channels have different affordances, and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions than speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures.