Abstract:
It has been shown repeatedly that visual and inertial sensory information on the heading of self-motion is fused by the CNS in a manner consistent with Bayesian Integration (BI). However, a few studies report violations of BI predictions. This dichotomy in experimental findings previously led us to develop a Causal Inference model for multisensory heading estimation, which could account for different strategies of processing multisensory heading information, based on the discrepancy between the headings of the visual and inertial cues. Surprisingly, the results of an assessment of this model showed that multisensory heading estimates were consistent with BI regardless of any discrepancy. Here, we hypothesized that Causal Inference is a slow top-down process, and that heading estimates for discrepant cues show less consistency with BI when motion duration increases. Six participants were presented with unisensory visual and inertial horizontal linear motions with headings ranging between ±180°, and combinations thereof with discrepancies up to ±90°. Motion profiles followed a single period of a raised cosine bell with a maximum velocity of 0.3 m/s, and had durations of two, four, and six seconds. For each stimulus, participants provided an estimate of the heading of self-motion. In general, the results showed that the probability that heading estimates are consistent with BI decreases as a function of stimulus duration, consistent with the hypothesis. We conclude that BI is likely the default mode of processing multisensory heading information, and that Causal Inference is a slow top-down process that interferes only when given enough time.
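The two quantitative ingredients of the abstract can be sketched briefly. Under BI with independent Gaussian noise on each cue, the fused heading is the reliability-weighted average of the visual and inertial headings, with weights proportional to inverse variance; the stimulus velocity follows a single period of a raised cosine bell peaking at 0.3 m/s. This is an illustrative sketch only: the noise model, parameter names, and function signatures are assumptions, not the study's actual implementation, and the circular nature of heading (wrap-around at ±180°) is ignored for simplicity.

```python
import numpy as np


def bi_heading_estimate(h_vis, h_ine, sigma_vis, sigma_ine):
    """Fuse visual and inertial heading cues (degrees) under Bayesian
    Integration: a weighted average with inverse-variance weights.
    Assumes independent Gaussian noise; ignores circular wrap-around."""
    w_vis = 1.0 / sigma_vis**2
    w_ine = 1.0 / sigma_ine**2
    return (w_vis * h_vis + w_ine * h_ine) / (w_vis + w_ine)


def raised_cosine_velocity(t, duration, v_max=0.3):
    """Velocity (m/s) at time t for a single period of a raised cosine
    bell of the given duration, peaking at v_max at t = duration / 2."""
    return v_max * 0.5 * (1.0 - np.cos(2.0 * np.pi * t / duration))
```

For equally reliable cues (sigma_vis == sigma_ine), the fused estimate falls midway between the two headings, which is why large discrepancies make pure BI behavior easy to detect; the velocity profile is zero at t = 0 and t = duration and reaches 0.3 m/s at the midpoint for each of the two-, four-, and six-second durations.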