
Poster

Visually grounded expectations influence semantic integration: An ERP (event related brain potentials) study on situated language

MPS-Authors
Weber, Andrea
Adaptive Listening, MPI for Psycholinguistics, Max Planck Society

Citation

Drenhaus, H., Weber, A., & Crocker, M. (2011). Visually grounded expectations influence semantic integration: An ERP (event related brain potentials) study on situated language. Poster presented at XI International Conference on Cognitive Neuroscience, Mallorca, Spain.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0011-A01D-2
Abstract
Behavioral studies have shown that an appropriate visual target anticipated by a verb leads to visually grounded expectations concerning the following verbal argument (e.g., Weber & Crocker, 2007). In a cross-modal ERP priming experiment, we sought to establish the integration costs (N400) of appropriate vs. inappropriate target words (verbal arguments) in the presence of an appropriate or inappropriate visual target. An auditory prime ('The woman bakes') was accompanied by pictures on a screen: the agent, an appropriate or inappropriate verbal object ('cake' vs. 'tree'), and two distractors. After the primes/pictures, participants performed a lexical decision task on visually presented nouns ('pizza' vs. 'tree'; crucially, the appropriate depicted object and the lexical target differ), during which ERPs were measured. We found a centro-parietally modulated negativity (N400) between 350 and 550 ms (baseline condition A > B > D > C). Comparison of A/B vs. C/D reveals a main effect of the auditory prime (verbal information) reducing processing costs. Crucially, however, processing costs on the target word increase when the contextually grounded expectations (A > B and C < D) are not met (strong N400). Our results extend previous studies showing that the online processing of situated language is related to context expectancy (e.g., Kutas & Hillyard, 1984), as evidenced by the interaction and interference between visual and auditory input.

Prime: "The woman bakes the"
A. Scene-match: cake, Target-match: "pizza" (baseline condition)
B. Scene-nomatch: tree, Target-match: "pizza"
C. Scene-match: cake, Target-nomatch: "tree"
D. Scene-nomatch: tree, Target-nomatch: "tree"