
Released

Journal Article

Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information

MPS-Authors

Drijvers, Linda
Communication in Social Interaction, Radboud University Nijmegen, External Organizations;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;
Other Research, MPI for Psycholinguistics, Max Planck Society;
The Communicative Brain, MPI for Psycholinguistics, Max Planck Society;

External Resource
No external resources are shared
Supplementary Material (public)
There is no public supplementary material available
Citation

Drijvers, L., Jensen, O., & Spaak, E. (2020). Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information. Human Brain Mapping. Advance online publication. doi:10.1002/hbm.25282.


Cite as: http://hdl.handle.net/21.11116/0000-0006-9CEB-2
Abstract
During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1440 Hz refresh rate). Integration ease was manipulated by auditory factors (clear/degraded speech) and visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual tagging frequencies (61 and 68 Hz). We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when integration was easiest (i.e., when speech was clear and accompanied by a congruent gesture). This intermodulation peak is a signature of nonlinear audiovisual integration and was strongest in the left inferior frontal gyrus and left temporal regions, areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of integration and demonstrates that speech and gesture information interact in higher-order language areas. Furthermore, we provide a proof of principle for the use of RIFT to study the integration of audiovisual stimuli in relation to, for instance, semantic context.
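
The underlying signal-processing logic is that a nonlinear interaction between two frequency-tagged signals produces power at their intermodulation (difference and sum) frequencies, whereas a purely linear mixture does not. The following is a minimal illustrative sketch, not the authors' analysis pipeline: it takes only the tagging frequencies (61 and 68 Hz) from the abstract, while the sampling rate, amplitudes, and the simple multiplicative interaction term are illustrative assumptions.

# Illustration (assumed parameters, not the study's code): combining a 61 Hz and a
# 68 Hz tag nonlinearly creates spectral peaks at 68 - 61 = 7 Hz and 68 + 61 = 129 Hz.
import numpy as np

fs = 1200                              # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)           # 10 s of signal
f_aud, f_vis = 61.0, 68.0              # tagging frequencies from the study

aud = np.sin(2 * np.pi * f_aud * t)    # auditory tag
vis = np.sin(2 * np.pi * f_vis * t)    # visual tag

linear = aud + vis                         # purely linear mixing
nonlinear = aud + vis + 0.5 * aud * vis    # adds a multiplicative (nonlinear) term

def spectrum(x):
    """Return frequency axis and amplitude spectrum of x."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    amp = np.abs(np.fft.rfft(x)) / len(x)
    return freqs, amp

freqs, amp_lin = spectrum(linear)
_, amp_nonlin = spectrum(nonlinear)

for f_target in (7.0, 61.0, 68.0, 129.0):
    idx = np.argmin(np.abs(freqs - f_target))
    print(f"{f_target:6.1f} Hz  linear: {amp_lin[idx]:.3f}   nonlinear: {amp_nonlin[idx]:.3f}")

Running this prints essentially zero amplitude at 7 Hz and 129 Hz for the linear mixture and clear peaks at those intermodulation frequencies for the nonlinear one; the study applies the same logic to the MEG spectra, treating the 7 Hz peak as evidence of nonlinear audiovisual integration.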