Record

Released

Poster

Visual modulation of neurons in voice-sensitive auditory cortex and the superior-temporal sulcus

MPG Authors
/persons/resource/persons84132

Perrodin, C
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84006

Kayser, C
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84063

Logothetis, NK
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84136

Petkov, CI
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources

Link
(any full text)

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary Material (freely accessible)
There are no freely accessible supplementary materials available.
Citation

Perrodin, C., Kayser, C., Logothetis, N., & Petkov, C. (2013). Visual modulation of neurons in voice-sensitive auditory cortex and the superior-temporal sulcus. Poster presented at Tucker-Davis Technologies Symposium on Advances and Perspectives in Auditory Neurophysiology (APAN 2013), San Diego, CA, USA.


Citation link: https://hdl.handle.net/21.11116/0000-0001-4E39-9
Abstract
Effective social interactions can depend upon the receiver combining vocal and facial content to form a coherent audiovisual representation of communication signals. Neuroimaging studies have identified face- or voice-sensitive areas in the primate brain, some of which have been proposed as candidate regions for face-voice integration. However, it was unclear how audiovisual influences occur at the neuronal level within such regions and in comparison to classically defined multisensory regions in temporal association cortex.
Here, we characterize visual influences from facial content on neuronal responses to vocalizations in a voice-sensitive region of the anterior supratemporal plane (STP) and in the anterior superior-temporal sulcus (STS). Using dynamic face and voice stimuli, we recorded individual units from both regions in the right hemisphere of two awake rhesus macaques. To test whether visual influences are specific to behaviorally relevant stimuli, we included a set of audiovisual control stimuli in which a voice was paired with a mismatched visual facial context.
Within the STP, our results show auditory sensitivity to various vocal features that was not evident in STS units. We identify a functionally distinct neuronal subpopulation in the STP that carries the area's sensitivity to voice-identity-related characteristics. Audiovisual interactions were prominent in both areas, with direct crossmodal convergence being more prevalent in the STS. Moreover, visual influences modulated the responses of STS neurons with greater specificity than those of STP neurons, being more often associated with congruent voice-face stimulus pairings.
Our results show that voice-sensitive cortex specializes in auditory analysis of vocal features, while congruency-sensitive visual influences emerge to a greater extent in the STS. Together, these findings highlight the transformation of audiovisual representations of communication signals across successive levels of the multisensory processing hierarchy in the primate temporal lobe.