Record


Released

Conference Paper

Speakers’ gestures predict the meaning and perception of iconicity in signs

MPG Authors

Ortega, Gerardo
Center for Language Studies, External Organization;
Other Research, MPI for Psycholinguistics, Max Planck Society;


Ozyurek, Asli
Center for Language Studies, External Organization;
Research Associates, MPI for Psycholinguistics, Max Planck Society;
Multimodal Language and Cognition, Radboud University Nijmegen, External Organizations;

External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)

Ortega_Schiefner_Ozyurek_2017a.pdf
(publisher version), 350KB

Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Ortega, G., Schiefner, A., & Ozyurek, A. (2017). Speakers’ gestures predict the meaning and perception of iconicity in signs. In G. Gunzelmann, A. Howe, & T. Tenbrink (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 889-894). Austin, TX: Cognitive Science Society.


Citation link: https://hdl.handle.net/11858/00-001M-0000-002D-4512-8
Abstract
Sign languages stand out in that there is a high prevalence of conventionalised linguistic forms that map directly to their referent (i.e., are iconic). Hearing adults show low performance when asked to guess the meaning of iconic signs, suggesting that the signs’ iconic features are largely inaccessible to them. However, it has not been investigated whether speakers’ gestures, which also share the property of iconicity, may assist non-signers in guessing the meaning of signs. Results from a pantomime generation task (Study 1) show that speakers’ gestures exhibit a high degree of systematicity and share different degrees of form overlap with signs (full, partial, and no overlap). Study 2 shows that signs with full and partial overlap are guessed more accurately and are assigned higher iconicity ratings than signs with no overlap. Deaf and hearing adults converge in their iconic depictions of some concepts due to shared conceptual knowledge and the shared manual-visual modality.