
Item Details


Released

Journal Article

Neuroperception: Facial expressions linked to monkey calls

MPS-Authors
/persons/resource/persons83932

Ghazanfar, AA
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84063

Logothetis, NK
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resource
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
No public fulltext is available
Supplementary Material (public)
There is no public supplementary material available
Citation

Ghazanfar, A., & Logothetis, N. (2003). Neuroperception: Facial expressions linked to monkey calls. Nature, 423(6943), 937-938. doi:10.1038/423937a.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-DC45-8
Abstract
The perception of human speech can be enhanced by a combination of auditory and visual signals [1, 2]. Animals sometimes accompany their vocalizations with distinctive body postures and facial expressions [3], although it is not known whether their interpretation of these signals is unified. Here we use a paradigm in which 'preferential looking' is monitored to show that rhesus monkeys (Macaca mulatta), a species that communicates by means of elaborate facial and vocal expression [4-7], are able to recognize the correspondence between the auditory and visual components of their calls. This crossmodal identification of vocal signals by a primate might represent an evolutionary precursor to humans' ability to match spoken words with facial articulation.