  The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration

Smith, A. C., Monaghan, P., & Huettig, F. (2017). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Journal of Memory and Language, 93, 276-303. doi:10.1016/j.jml.2016.08.005.

Files

Name: Smith_Monaghan_Huettig_JML_2017.pdf (Publisher version), 2MB
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]

Creators

Creators:
Smith, Alastair Charles (1), Author
Monaghan, Padraic (2), Author
Huettig, Falk (1, 3), Author
Affiliations:
1 Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society, ou_792545
2 Department of Psychology, Lancaster University, ou_persistent22
3 The Cultural Brain, MPI for Psycholinguistics, Max Planck Society, Wundtlaan 1, 6525 XD Nijmegen, NL, ou_2579693

Content

Free keywords: visual world paradigm, visual attention, spoken word recognition, connectionist modelling, multimodal processing
Abstract: Ambiguity in natural language is ubiquitous, yet spoken communication is effective because information carried in the speech signal is integrated with information available in the surrounding multimodal landscape. Language-mediated visual attention requires the integration of visual and linguistic information and has therefore been used to examine properties of the architecture supporting multimodal processing during spoken language comprehension. In this paper, we test predictions generated by alternative models of this multimodal system. A model in which multimodal information is combined at the level of words' lexical representations (TRACE) predicted a stronger effect of phonological rhyme, relative to semantic and visual information, on gaze behaviour, whereas a model in which sub-lexical information can interact across modalities (MIM) predicted a greater influence of visual and semantic information compared to phonological rhyme. Two visual world experiments designed to test these predictions offer support for sub-lexical multimodal interaction during online language processing.

Details

Language(s): eng - English
Dates: 2016-08-26 (online), 2017 (issued)
Publication Status: Issued
Rev. Type: Peer
Identifiers: DOI: 10.1016/j.jml.2016.08.005

Source 1

Title: Journal of Memory and Language
Source Genre: Journal
Publ. Info: New York : Academic Press
Volume / Issue: 93
Start / End Page: 276 - 303
Identifier: ISSN: 0749-596X
CoNE: https://pure.mpg.de/cone/journals/resource/954928495417