  Modelling multimodal interaction in language mediated eye gaze

Smith, A. C., Huettig, F., & Monaghan, P. (2012). Modelling multimodal interaction in language mediated eye gaze. Talk presented at the 13th Neural Computation and Psychology Workshop [NCPW13]. San Sebastian, Spain. 2012-07-12 - 2012-07-14.

Creators:
Smith, Alastair Charles [1, 2], Author
Huettig, Falk [1, 3], Author
Monaghan, Padraic [4], Author

Affiliations:
1. Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society (ou_792545)
2. International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL (ou_1119545)
3. Donders Institute for Brain, Cognition and Behaviour, External Organizations, Nijmegen, NL (ou_55236)
4. Department of Psychology, Lancaster University, Lancaster, UK (ou_persistent22)

Content

Free keywords: -
Abstract: Hub-and-spoke models of semantic processing, which integrate modality-specific information within a central resource, have proven successful in capturing a range of neuropsychological phenomena (Rogers et al., 2004; Dilkina et al., 2008). In our study we investigate whether the scope of the hub-and-spoke architectural framework can be extended to capture behavioural phenomena in other areas of cognition. The visual world paradigm (VWP) has contributed significantly to our understanding of the information and processes involved in spoken word recognition. In particular, it has highlighted the importance of non-linguistic influences during language processing, indicating that combined information from vision, phonology, and semantics is evident in performance on such tasks (see Huettig, Rommers & Meyer, 2011). Huettig & McQueen (2007) demonstrated that participants' fixations to objects presented within a single visual display varied systematically according to their phonological, semantic, and visual relationship to a spoken target word. The authors argue that only an explanation allowing for influence from all three knowledge types can account for the observed behaviour. To date, computational models of the VWP (Allopenna et al., 1998; Mayberry et al., 2009; Kukona et al., 2011) have focused largely on linguistic aspects of the task and have therefore been unable to explain the growing body of experimental evidence emphasising the influence of non-linguistic information on spoken word recognition. Our study demonstrates that an emergent connectionist model, based on the hub-and-spoke models of semantic processing, which integrates visual, phonological, and functional information within a central resource, is able to capture the intricate time-course dynamics of eye-fixation behaviour reported in Huettig & McQueen (2007). Our findings indicate that such language-mediated visual attention phenomena can emerge largely from the statistics of the problem domain and may not require additional domain-specific processing constraints.
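For readers unfamiliar with the hub-and-spoke architecture the abstract refers to, the following is a minimal illustrative sketch, not the authors' implementation: three modality-specific "spoke" layers (visual, phonological, functional) exchange activation with a shared "hub" layer over several settling cycles. The layer sizes, random untrained weights, number of cycles, and update rule are all assumptions chosen for illustration; the model presented in the talk was trained and its details differ.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes; the abstract does not specify these.
SPOKES = {"visual": 20, "phonological": 20, "functional": 20}
N_HUB = 50

# Random bidirectional weights between each spoke and the shared hub
# (a trained model would learn these from cross-modal co-occurrence).
W = {name: rng.normal(0.0, 0.1, size=(n, N_HUB)) for name, n in SPOKES.items()}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def settle(clamped, steps=10):
    """Let activation flow spoke -> hub -> spoke for a fixed number of cycles.

    `clamped` maps spoke names to input patterns that stay fixed (e.g. the
    phonological form of a spoken word); unclamped spokes start at zero and
    are filled in by feedback from the hub, modelling cross-modal activation.
    """
    acts = {name: clamped.get(name, np.zeros(n)) for name, n in SPOKES.items()}
    hub = np.zeros(N_HUB)
    for _ in range(steps):
        # The hub integrates evidence from all modalities at once.
        hub = sigmoid(sum(acts[name] @ W[name] for name in SPOKES))
        # The hub reactivates the spokes; clamped inputs stay fixed.
        for name in SPOKES:
            if name not in clamped:
                acts[name] = sigmoid(hub @ W[name].T)
    return hub, acts

# Example: clamp a phonological input and read out the visual spoke, the
# analogue of a spoken word activating visually related objects in the VWP.
phon_input = rng.random(SPOKES["phonological"])
hub, acts = settle({"phonological": phon_input})
print(acts["visual"].shape)  # (20,)

In a sketch like this, fixation proportions in the visual world paradigm could be read off as competition among the visual-spoke representations of the displayed objects as activation settles over time.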

Details

Language(s): eng - English
Dates: 2012
Publication Status: Not specified
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: Peer
Identifiers: -
Degree: -

Event

Title: 13th Neural Computation and Psychology Workshop [NCPW13]
Place of Event: San Sebastian, Spain
Start-/End Date: 2012-07-12 - 2012-07-14
