
Released

Poster

Testing alternative architectures for multimodal integration during spoken language processing in the visual world

MPS-Authors
Smith, Alastair Charles
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;

Monaghan, Padraic
Research Associates, MPI for Psycholinguistics, Max Planck Society;

Huettig, Falk
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;

Citation

Smith, A. C., Monaghan, P., & Huettig, F. (2016). Testing alternative architectures for multimodal integration during spoken language processing in the visual world. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002B-9CFD-F
Abstract
Current cognitive models of spoken word recognition and comprehension are underspecified with respect to when and how multimodal information interacts. We compare two computational models, both of which permit the integration of concurrent information within linguistic and non-linguistic processing streams; however, their architectures differ critically in the level at which multimodal information interacts. We compare the predictions of the Multimodal Integration Model (MIM) of language processing (Smith, Monaghan, & Huettig, 2014), which implements full interactivity between modalities, to a model in which interaction between modalities is restricted to lexical representations, which we implement as an extended multimodal version of the TRACE model of spoken word recognition (McClelland & Elman, 1986). Our results demonstrate that previous visual world data sets involving phonological onset similarity are compatible with both models, whereas our novel experimental data on rhyme similarity are able to distinguish between the competing architectures. The fully interactive MIM system correctly predicts a greater influence of visual and semantic information relative to phonological rhyme information on gaze behaviour, while, by contrast, a system that restricts multimodal interaction to the lexical level overestimates the influence of phonological rhyme, thereby providing an upper limit for when information interacts in multimodal tasks.
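
The architectural contrast described above can be pictured as two information-flow sketches. The following Python snippet is a minimal illustration, not the authors' MIM or extended TRACE implementation: layer sizes, random weights, and the function names (fully_interactive_gaze, lexical_level_gaze) are assumptions introduced purely to show where visual and semantic information is allowed to interact with the speech signal.

```python
# Minimal sketch contrasting the two architectures described in the abstract.
# NOT the authors' implementation (MIM or the extended TRACE model); layer
# sizes, random weights, and function names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensionalities: three input modalities, a lexical layer, an
# integration layer, and a gaze output over objects in the visual display.
N_PHON, N_VIS, N_SEM, N_LEX, N_INT, N_OBJECTS = 8, 8, 8, 6, 6, 4

# Fully interactive architecture (MIM-style): phonological, visual, and
# semantic information all converge on a shared integration layer.
W_phon_int = rng.normal(size=(N_INT, N_PHON))
W_vis_int = rng.normal(size=(N_INT, N_VIS))
W_sem_int = rng.normal(size=(N_INT, N_SEM))
W_int_gaze = rng.normal(size=(N_OBJECTS, N_INT))

def fully_interactive_gaze(phon, vis, sem):
    integration = sigmoid(W_phon_int @ phon + W_vis_int @ vis + W_sem_int @ sem)
    return sigmoid(W_int_gaze @ integration)

# Lexically restricted architecture (extended-TRACE-style): visual and
# semantic information interact with the speech signal only at the
# lexical layer, after pre-lexical phonological processing.
W_phon_lex = rng.normal(size=(N_LEX, N_PHON))
W_vis_lex = rng.normal(size=(N_LEX, N_VIS))
W_sem_lex = rng.normal(size=(N_LEX, N_SEM))
W_lex_gaze = rng.normal(size=(N_OBJECTS, N_LEX))

def lexical_level_gaze(phon, vis, sem):
    phon_units = sigmoid(phon)  # pre-lexical stage: speech input only
    lexical = sigmoid(W_phon_lex @ phon_units + W_vis_lex @ vis + W_sem_lex @ sem)
    return sigmoid(W_lex_gaze @ lexical)

if __name__ == "__main__":
    phon = rng.random(N_PHON)  # unfolding spoken-word input
    vis = rng.random(N_VIS)    # visual features of the displayed objects
    sem = rng.random(N_SEM)    # semantic features of the displayed objects
    print("fully interactive gaze pattern:", fully_interactive_gaze(phon, vis, sem))
    print("lexical-level gaze pattern:   ", lexical_level_gaze(phon, vis, sem))
```

In the first function the non-linguistic streams can shape processing at every level that feeds gaze; in the second they enter only at the lexical layer, which is the difference the rhyme-similarity data are used to test.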