
Released

Meeting Abstract

Cross-modal face distinctiveness

MPS-Authors

Bülthoff, I
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Bülthoff, I., & Newell, F. (2007). Cross-modal face distinctiveness. In 8th International Multisensory Research Forum (IMRF 2007).


Cite as: https://hdl.handle.net/21.11116/0000-0003-F4BF-3
Abstract
Face distinctiveness has been investigated extensively in the visual domain, and the face space framework (Valentine, 1991) has been widely used to explain its underlying mechanisms. We investigated whether face distinctiveness could also depend on multi-sensory information.
Participants first learned a set of typical faces that was divided into two subsets. Each face in one subset was presented with a distinctive auditory stimulus, while the faces in the other subset were paired with typical auditory stimuli. The auditory stimuli were either utterances or instrumental sounds. Following the training session, participants performed either an old/new visual recognition task or the same visual recognition task preceded by an auditory priming stimulus.
The results suggest that utterances, but not arbitrary sounds, can be associated closely with faces and allow an interaction between visual and auditory inputs that changes the perceptual quality of the faces; that is, faces become distinctive when they have been associated with distinctive utterances. Furthermore, the results suggest that the face space framework should accommodate multi-sensory input.