  Language in the visual modality: Co-speech Gesture and Sign

Ozyurek, A., & Ortega, G. (2016). Language in the visual modality: Co-speech Gesture and Sign. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.

 Creators:
Ozyurek, Asli (1, 2), Author
Ortega, Gerardo (2), Author
Affiliations:
1. Research Associates, MPI for Psycholinguistics, Max Planck Society, Wundtlaan 1, 6525 XD Nijmegen, NL, ou_2344700
2. Center for Language Studies, External Organizations, ou_55238

Content

Free keywords: -
Abstract: As humans, our ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures used in spoken languages. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced and perceived in tight semantic and temporal integration with speech. Thus, language, in its primary face-to-face context (both phylogenetically and ontogenetically), is a multimodal phenomenon. In fact, the visual modality seems to be a more common means of communication than speech when we consider both deaf and hearing individuals. Most research on language, however, has focused on spoken/written language and has rarely considered the visual context in which it is embedded in order to understand our linguistic capacity. This talk gives a brief review of what we know so far about the visual expressive resources of language in both spoken and sign languages and their role in communication and cognition, broadening our scope of language. We will argue, based on these recent findings, that our models of language need to take visual modes of communication into account and provide a unified framework for how semiotic and expressive resources of the visual modality are recruited for both spoken and sign languages, as well as their consequences for processing, also considering their neural underpinnings.

Details

Language(s): eng - English
 Dates: 2016
 Publication Status: Not specified
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: -
 Degree: -

Event

Title: the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior
Place of Event: Berg en Dal, The Netherlands
Start-/End Date: 2016-07-03 - 2016-07-14
