Journal Article

Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses


Lowitszch, Svenja
Optical 3D Metrology, Leuchs Division, Max Planck Institute for the Science of Light, Max Planck Society


Brefczynski-Lewis, J., Lowitszch, S., Parsons, M., Lemieux, S., & Puce, A. (2009). Audiovisual non-verbal dynamic faces elicit converging fMRI and ERP responses. Brain Topography, 21(3-4), 193-206. doi:10.1007/s10548-009-0093-6.

Cite as: https://hdl.handle.net/11858/00-001M-0000-002D-6BE5-D
In an everyday social interaction we automatically integrate another's facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input, a phenomenon previously well studied with human audiovisual speech, but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal, human vocalizations (e.g., coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum, in which AV activation was greater than either modality alone, but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed common activation, in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for the auditory N140 and face-sensitive N170, and late AV-maximum and common-activation effects. Based on the convergence between the fMRI and ERP data, we propose a mechanism whereby a multisensory stimulus may be signaled as early as 60 ms and facilitated in sensory-specific regions through increased processing speed (at N170) and efficiency (decreased amplitude of auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.