Item Details


Released

Journal Article

On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials

MPS-Authors
/persons/resource/persons142

Ozyurek, Asli
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;
Neurobiology of Language Group, MPI for Psycholinguistics, Max Planck Society;
Language in Action, MPI for Psycholinguistics, Max Planck Society;

Kita, Sotaro
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;

/persons/resource/persons69

Hagoort, Peter
Neurobiology of Language Group, MPI for Psycholinguistics, Max Planck Society;
FC Donders Centre for Cognitive Neuroimaging, external;
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;

Fulltext (public)

ozyurek_2007_on-line.pdf
(publisher version), 223KB

Citation

Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-1AD0-A
Abstract
During language comprehension, listeners use the global semantic representation from the previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated with the previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the differences in modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches were found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than verbal semantics alone. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.