
Record

  Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions

Drijvers, L., & Ozyurek, A. (2016). Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

Creators

Creators:
Drijvers, Linda (1, 2), Author
Ozyurek, Asli (1, 3), Author
Affiliations:
1 Center for Language Studies, External Organizations, ou_55238
2 Donders Institute for Brain, Cognition and Behaviour, External Organizations, ou_55236
3 Research Associates, MPI for Psycholinguistics, Max Planck Society, Wundtlaan 1, 6525 XD Nijmegen, NL, ou_2344700

Content

Keywords: Perception: Speech Perception and Audiovisual Integration
Abstract: Face-to-face communication involves integrating speech with visual input, such as co-speech gestures. Iconic gestures (e.g. a drinking gesture) can enhance speech comprehension, especially when speech is difficult to comprehend, such as in noise (e.g. Holle et al., 2010) or in non-native speech comprehension (e.g. Sueyoshi & Hardison, 2005). Previous behavioral and neuroimaging studies have argued that the integration of speech and gestures is stronger when speech intelligibility decreases (e.g. Holle et al., 2010), but that in clear speech, non-native listeners benefit more from gestures than native listeners do (Dahl & Ludvigson, 2014; Sueyoshi & Hardison, 2005). So far, the neurocognitive mechanisms by which non-native speakers integrate speech and gestures in adverse listening conditions remain unknown. We investigated whether highly proficient non-native speakers of Dutch make use of iconic co-speech gestures as much as native speakers do during clear and degraded speech comprehension. In an EEG study, native (n = 23) and non-native (German, n = 23) speakers of Dutch watched videos of an actress uttering Dutch action verbs. Speech was presented either clear or degraded by 6-band noise-vocoding, and was accompanied by a matching or mismatching iconic gesture. This allowed us to calculate the effects of both speech degradation and the semantic congruency of the gesture on the N400 component, which was taken as an index of semantic integration effort (Kutas & Federmeier, 2011). In native listeners, N400 amplitude was sensitive both to mismatches between speech and gesture and to degradation; the most pronounced N400 was found in response to degraded speech with a mismatching gesture (DMM), followed by degraded speech with a matching gesture (DM), clear speech with a mismatching gesture (CMM), and clear speech with a matching gesture (CM) (DMM > DM > CMM > CM, all p < .05). In non-native speakers, we found a difference between CMM and CM but not between DMM and DM, although the degraded conditions differed from the clear conditions (DMM = DM > CMM > CM, all significant comparisons p < .05). Directly comparing native to non-native speakers, the N400 effect (i.e. the difference between CMM and CM, or between DMM and DM) was greater for non-native speakers in clear speech, but greater for native speakers in degraded speech. These results provide further evidence for the claim that in clear speech, non-native speakers benefit more from gestural information than native speakers, as indexed by a larger N400 effect for the mismatch manipulation. Both native and non-native speakers show integration effort during degraded speech comprehension. However, native speakers require less effort to recognize auditory cues in degraded speech than non-native speakers, resulting in a larger N400 to degraded speech with a mismatching gesture for natives than for non-natives. Conversely, non-native speakers require more effort to resolve auditory cues when speech is degraded and therefore cannot use those cues to map the semantic information of the gesture onto speech as well as native speakers can. In sum, non-native speakers can benefit from gestural information in speech comprehension more than native listeners, but not when speech is degraded. Our findings suggest that the native language of the listener modulates multimodal semantic integration in adverse listening conditions.
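The abstract specifies 6-band noise-vocoding as the degradation method. For readers unfamiliar with the technique, the sketch below illustrates a generic noise-vocoder in Python (NumPy/SciPy); it is not the authors' stimulus pipeline, and the band edges, filter orders, and envelope cutoff are assumed values chosen only for illustration.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=6, f_lo=50.0, f_hi=8000.0, env_cutoff=30.0):
    """Replace the fine structure in each frequency band of `speech` with
    band-limited noise modulated by that band's amplitude envelope."""
    # Logarithmically spaced band edges (assumed; f_hi must stay below fs/2)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    noise = np.random.randn(len(speech))
    vocoded = np.zeros(len(speech))
    env_sos = butter(4, env_cutoff, btype='lowpass', fs=fs, output='sos')
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(band_sos, speech)
        envelope = sosfiltfilt(env_sos, np.abs(hilbert(band)))  # smoothed band envelope
        carrier = sosfiltfilt(band_sos, noise)                  # band-limited noise carrier
        vocoded += envelope * carrier
    # Match the overall level of the original recording
    vocoded *= np.sqrt(np.mean(speech ** 2) / np.mean(vocoded ** 2))
    return vocoded

With fewer bands, the summed envelopes carry less spectral detail, so intelligibility drops; 6-band vocoding, as used in the study, yields degraded but still partly intelligible speech.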

Details

Language(s): eng - English
Date: 2016
Publication status: Not specified
Pages: -
Place, publisher, edition: -
Table of contents: -
Review method: Peer review
Degree: -
Identifiers: -

Event

Title: Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016)
Location: London, UK
Start/End date: 2016-08-17 - 2016-08-20
