Record

Released

Meeting Abstract

Auditory and audiovisual specificity for processing communication signals in the superior temporal lobe

MPG Authors
/persons/resource/persons84132

Perrodin,  C
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84136

Petkov,  CI
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84063

Logothetis,  Nikos K
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84006

Kayser,  Christoph
Research Group Physiology of Sensory Integration, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
Full texts (freely accessible)
No freely accessible full texts are available
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Perrodin, C., Petkov, C., Logothetis, N. K., & Kayser, C. (2014). Auditory and audiovisual specificity for processing communication signals in the superior temporal lobe. In 15th International Multisensory Research Forum (IMRF 2014) (p. 28).


Citation link: http://hdl.handle.net/21.11116/0000-0001-33C9-3
Abstract
Effective social interactions can depend upon the receiver combining vocal and facial content to form a coherent audiovisual representation of communication signals. Neuroimaging studies have identified face- or voice-sensitive areas in the primate temporal lobe, some of which have been proposed as candidate regions for face-voice integration. So far, however, neurons in these areas have been studied primarily in their respective sensory modality. In addition, these higher-level sensory areas are typically not prominent in current models of multisensory processing, unlike early sensory and association cortices. Thus, it was unclear how audiovisual influences occur at the neuronal level within such regions, especially in comparison to classically defined multisensory regions in temporal association cortex. Here I will present data exploring auditory (voice) and visual (face) influences on neuronal responses to vocalizations, obtained using extracellular recordings targeting a voice-sensitive region of the anterior supratemporal plane and the neighboring superior temporal sulcus (STS) in awake rhesus macaques. Our findings suggest that within the superior temporal lobe, neurons in voice-sensitive cortex specialize in the auditory analysis of vocal features, while congruency-sensitive visual influences emerge to a greater extent in STS neurons. These results help clarify the audiovisual representation of communication signals at two stages of the sensory pathway in primate superior temporal regions, and are consistent with reversed gradients of functional specificity in unisensory vs. multisensory processing along their respective hierarchies.