Abstract:
The integration of multiple sensory cues pertaining to the same object is essential for precise and accurate perception. An optimal strategy is to weight these cues in proportion to their reliability. Moreover, as the reliability of sensory information may change rapidly, the perceptual weight assigned to each cue must also change dynamically. Recent studies showed that human observers apply this principle when integrating low-level unisensory and multisensory signals, but evidence for high-level perception remains scarce. Here we asked whether human observers dynamically reweight high-level visual cues during face recognition. We therefore had subjects (n = 6) identify one of two previously learned synthetic facial identities using form and motion cues, and varied form reliability (i.e., by making faces "older") on a trial-to-trial basis. For each subject, we fitted psychometric functions to the proportion of identity choices in each condition. As predicted by optimal cue integration, the empirical combined variance did not differ from the optimal combined variance (p > 0.2, t-test). Importantly, reduced form reliability (p < 0.01) led to a reweighting of the form cue (p < 0.01). Our data thus suggest that humans not only integrate but also dynamically reweight high-level visual cues, such as facial form and motion, to yield a coherent percept of a facial identity.
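The optimal cue-integration principle described above can be sketched numerically. The following is an illustrative example, not the paper's analysis code: each cue yields an independent Gaussian estimate, the optimal combined estimate weights each cue by its reliability (inverse variance), and the combined variance is never larger than that of the best single cue. The specific estimate values and variances are hypothetical.

```python
def optimal_combine(estimates, variances):
    """Reliability-weighted combination of independent Gaussian cues.

    Weights are proportional to 1/variance; the combined variance is
    the inverse of the summed reliabilities.
    Returns (combined_estimate, combined_variance).
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * x for w, x in zip(weights, estimates))
    combined_var = 1.0 / total  # always <= min(variances)
    return combined, combined_var

# Hypothetical identity estimates from a form cue and a motion cue.
form, motion = 1.0, 0.0

# Equal reliability: both cues weighted equally.
est_hi, var_hi = optimal_combine([form, motion], [1.0, 1.0])

# Degraded form cue (e.g., an "older" face): its weight should drop,
# shifting the combined estimate toward the motion cue.
est_lo, var_lo = optimal_combine([form, motion], [4.0, 1.0])
```

In the second call the form cue's weight falls from 0.5 to 0.2, illustrating the dynamic reweighting the study tests for at the level of facial identity.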