
Item Details


Released

Conference Paper

Sensory modality of input influences encoding of motion events in speech but not co-speech gestures

MPS-Authors

Mamus, Ezgi
Multimodal Language and Cognition, Radboud University Nijmegen, External Organizations;
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society;


Ozyurek, Asli
Center for Language Studies, External Organizations;
Research Associates, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;
Multimodal Language and Cognition, Radboud University Nijmegen, External Organizations;

Citation

Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 376-382). Vienna: Cognitive Science Society.


Cite as: https://hdl.handle.net/21.11116/0000-0008-7D5B-7
Abstract
Visual and auditory channels have different affordances, and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions, compared to speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures.