  Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension

Drijvers, L., & Ozyurek, A. (2017). Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension. Journal of Speech, Language, and Hearing Research, 60, 212-222. doi:10.1044/2016_JSLHR-H-16-0101.

Files
Drijvers_Ozyurek_2017.pdf (Publisher version), 563 KB
Name: Drijvers_Ozyurek_2017.pdf
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]

Creators

Drijvers, Linda (1, 2, 3, 4), Author
Ozyurek, Asli (1, 3, 5, 6), Author
Affiliations:
1. Center for Language Studies, External Organizations, ou_55238
2. International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL, ou_1119545
3. Donders Institute for Brain, Cognition and Behaviour, External Organizations, ou_55236
4. The Communicative Brain, MPI for Psycholinguistics, Max Planck Society, Wundtlaan 1, 6525 XD Nijmegen, NL, ou_3275695
5. Research Associates, MPI for Psycholinguistics, Max Planck Society, Wundtlaan 1, 6525 XD Nijmegen, NL, ou_2344700
6. Multimodal Language and Cognition, Radboud University Nijmegen, External Organizations, ou_3055480

Content

Abstract:
Purpose: This study investigated whether, and to what extent, iconic co-speech gestures add to the information conveyed by visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only examined them separately.
Method: Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).
Results: Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than in 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.

Details

Language(s): eng - English
Dates: 2016, 2016, 2017
Publication Status: Published in print
Rev. Type: Peer
Identifiers: DOI: 10.1044/2016_JSLHR-H-16-0101

Source 1

Title: Journal of Speech, Language, and Hearing Research
Source Genre: Journal
Publ. Info: Rockville, MD : American Speech-Language-Hearing Association
Volume / Issue: 60
Start / End Page: 212 - 222
Identifier: ISSN: 1092-4388
CoNE: https://pure.mpg.de/cone/journals/resource/954927548270