
Released

Talk

Modelling the effect of literacy on multimodal interactions during spoken language processing in the visual world

MPS-Authors

Smith, Alastair Charles
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society;


Huettig, Falk
Psychology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;

Citation

Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effect of literacy on multimodal interactions during spoken language processing in the visual world. Talk presented at the Tagung experimentell arbeitender Psychologen (TEAP 2013), Vienna, Austria, 2013-03-24 to 2013-03-27.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000E-FEAE-8
Abstract
Recent empirical evidence suggests that language-mediated eye gaze in the visual world varies across individuals and is partly determined by their level of formal literacy training. Huettig, Singh and Mishra (2011) showed that, unlike high-literate individuals, whose eye gaze was closely time-locked to phonological overlap between a spoken target word and items presented in a visual display, low-literate individuals' eye gaze was not tightly locked to phonological overlap in the speech signal but was instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behaviour is an emergent property of an increased ability to extract phonological structure from the speech signal: high literates can exploit fine-grained phoneme-level structure, whereas low literates rely more on syllabic structure. We tested this hypothesis using an emergent connectionist model, based on the hub-and-spoke models of semantic processing (Dilkina et al., 2008), that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behaviour similar to those observed between high and low literates emerge when the model is trained on a speech signal segmented either by phoneme (simulating high literates) or by syllable (simulating low literates).
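
The abstract describes the architecture only at a high level, and no code accompanies this record. Purely as an illustration of the hub-and-spoke integration idea it sketches, the following Python/NumPy fragment shows one hypothetical way modality-specific spokes could feed a shared central hub that drives fixation behaviour. All layer names and sizes, the single-step forward pass, and the reduction of phoneme- versus syllable-level segmentation to input dimensionality are assumptions of this sketch, not the authors' model.

# Hypothetical sketch of a hub-and-spoke integration network, loosely
# following the architecture described in the abstract. Layer sizes,
# names, and the forward pass are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def init_spoke(in_dim, hub_dim):
    """Random input-to-hub weights for one modality-specific spoke."""
    return rng.normal(0.0, 0.1, size=(in_dim, hub_dim))

class HubAndSpoke:
    def __init__(self, phon_dim, vis_dim, sem_dim, hub_dim=64, out_dim=4):
        # Each spoke maps one modality into the shared central resource.
        self.W_phon = init_spoke(phon_dim, hub_dim)
        self.W_vis = init_spoke(vis_dim, hub_dim)
        self.W_sem = init_spoke(sem_dim, hub_dim)
        # Readout from the hub to fixation probabilities over display items.
        self.W_out = init_spoke(hub_dim, out_dim)

    def forward(self, phon, vis, sem):
        # The hub sums the modality-specific projections before a
        # nonlinearity, so phonological, visual, and semantic evidence
        # interact within the central resource.
        hub = np.tanh(phon @ self.W_phon + vis @ self.W_vis + sem @ self.W_sem)
        logits = hub @ self.W_out
        e = np.exp(logits - logits.max())
        return e / e.sum()  # softmax over the display items

# Phoneme-segmented speech provides a finer-grained phonological code than
# syllable-segmented speech; here that contrast is (crudely) reduced to the
# dimensionality of the phonological spoke.
phoneme_model = HubAndSpoke(phon_dim=40, vis_dim=20, sem_dim=30)
syllable_model = HubAndSpoke(phon_dim=12, vis_dim=20, sem_dim=30)

fix = phoneme_model.forward(rng.normal(size=40), rng.normal(size=20),
                            rng.normal(size=30))
print("fixation probabilities:", fix.round(3))

In the reported simulations the contrast between high and low literates arises from training on differently segmented speech input; the sketch above only fixes the corresponding representational granularity and omits training entirely.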