Abstract:
Humans integrate auditory and visual spatial cues to locate objects. Generally, location judgments
are dominated by vision because observers localize an auditory cue close to a visual cue
even if they have been instructed to ignore the latter (ventriloquist effect). A recent model of
multisensory integration proposes that the ventriloquist effect is governed by two principles:
First, spatially discrepant cues are only integrated if the observer infers that both cues stem
from one object (principle of causal inference). Second, if the inference yields the assumption
that both cues originate from one object, the cues are integrated by weighting them
according to their relative reliability (principle of Bayes-optimal cue weighting). Thus, the
bimodal estimate of the object's location has a higher reliability than either unisensory
estimates per se. In order to test this model, 26 subjects were presented with spatial auditory
(HRTF-convolved white noise) and visual cues (cloud of dots). The 5x5x5 factorial design
manipulated (1) the auditory cue location, (2) the visual cue location and (3) the reliability
of the visual cue via the width of the cloud of dots. Subjects were instructed to locate the
auditory cue while ignoring the visual cue and to judge the spatial unity of both cues. In line
with the principle of causal inference, the ventriloquist effect was weaker
and unity judgments were less frequent for larger audiovisual discrepancies. For small spatial
discrepancies, the ventriloquist effect was weaker at low levels of visual reliability, implying
that cues were weighted in a Bayes-optimal fashion only when a common cause of both cues was assumed.
A probabilistic model incorporating the principles of causal inference and Bayes-optimal cue
weighting accurately fitted the behavioral data. Overall, the pattern of results suggested that
both principles describe important processes governing multisensory integration.
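The Bayes-optimal cue-weighting principle described above can be sketched with the standard inverse-variance formula: each cue is weighted by its reliability (the inverse of its noise variance), and the fused estimate is more reliable than either unisensory estimate. The cue locations and noise levels below are hypothetical illustrations, not the authors' fitted parameters, and this sketch omits the causal-inference stage of the full model.

```python
def fuse(loc_a, sd_a, loc_v, sd_v):
    """Combine auditory and visual location cues by inverse-variance weighting.

    loc_a, loc_v: unisensory location estimates (e.g. azimuth in degrees)
    sd_a, sd_v:   standard deviations of the respective sensory noise
    Returns the fused location estimate and its standard deviation.
    """
    w_a = 1.0 / sd_a ** 2   # reliability (inverse variance) of the auditory cue
    w_v = 1.0 / sd_v ** 2   # reliability of the visual cue
    loc = (w_a * loc_a + w_v * loc_v) / (w_a + w_v)
    sd = (1.0 / (w_a + w_v)) ** 0.5
    return loc, sd

# Hypothetical example: a narrow dot cloud makes the visual cue far more
# reliable than the auditory one, so the fused estimate is pulled toward the
# visual location (the ventriloquist effect), and its SD is below both
# unisensory SDs (higher bimodal reliability).
loc, sd = fuse(loc_a=0.0, sd_a=4.0, loc_v=5.0, sd_v=1.0)
```

Widening the dot cloud (increasing `sd_v`) shifts weight back toward the auditory cue, matching the finding that the ventriloquist effect weakens at low visual reliability.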