
Item Details


Released

Poster

Face Distinctiveness can be Modulated by Cross-Modal Interaction with Auditory Stimuli

MPS-Authors

Bülthoff,  I
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Bülthoff, I., & Newell, F. (2006). Face Distinctiveness can be Modulated by Cross-Modal Interaction with Auditory Stimuli. Poster presented at 9th Tübingen Perception Conference (TWK 2006), Tübingen, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D29B-B
Abstract
In this study we ask whether visually typical faces can become perceptually distinctive when they are paired with distinctive auditory stimuli. In a first set of experiments (Bülthoff & Newell, ECVP 2004), we investigated the effect of voice distinctiveness on face recognition. Memory for a face can be influenced by the distinctiveness of an utterance with which it has been associated. Furthermore, recognition of a familiar face can be primed by a paired utterance. These findings suggest that there is a tight, cross-modal coupling between the presented faces and the associated utterances, and that face distinctiveness can be influenced by cross-modal interaction with auditory stimuli such as voices. In another set of experiments, we used instrumental sounds instead of voices and showed that arbitrary auditory stimuli can also affect memory for faces: faces that had been paired with distinctive instrumental sounds were better recognized in an old/new task than faces paired with typical instrumental sounds. Here we investigated whether these instrumental sounds can also prime face recognition, even though such auditory stimuli are not naturally associated with faces as voices are. Our results suggest that this is not the case; arbitrary auditory stimuli do not prime recognition of faces. This finding suggests that attentional differences may have produced the better recognition performance for faces paired with distinctive sounds in the old/new task. It also suggests that utterances are easier to associate closely with faces than arbitrary sounds. In a last set of experiments, we investigated whether the voice priming effect shown in the first set of experiments might be based on the use of a different first name in each utterance. That is, we asked whether semantic rather than perceptual information was decisive in the utterances used. We repeated the priming experiment with the same voice stimuli, but with the name information removed. The results show that there is still a significant priming effect of voices on faces, albeit weaker than in the full-voice experiment. The semantic information conveyed by the first name helps, but is not decisive for the priming effect of voices on face recognition.