Meeting Abstract

Learning to combine arbitrary signals from vision and touch


Ernst, M.
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Ernst, M., & Jäkel, F. (2003). Learning to combine arbitrary signals from vision and touch. In 4th International Multisensory Research Forum (IMRF 2003).

Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-DC73-F
When different perceptual signals of the same physical property are integrated, e.g. the size of an object, which can be seen and felt, they form a more reliable sensory estimate. This, however, implies that the sensory system already knows which signals belong together and how they are related. In a Bayesian model of cue integration, this prior knowledge can be made explicit. Here, we examine whether such a relationship between two arbitrary sensory signals from vision and touch can be learned from their statistical co-occurrence such that they become integrated. In the Bayesian model this means changing the prior distribution over the stimuli. To this end, we trained subjects with stimuli that are usually uncorrelated in the world: the luminance of an object (visual signal) and its stiffness (haptic signal). In the training phase we presented only combinations of these signals that were highly correlated. Before and after training we measured discrimination performance with distributions of stimuli that were either congruent with the correlation during training or incongruent; the incongruent stimuli came from a distribution that was anti-correlated relative to training. If subjects were sensitive to the correlation between the signals, we would expect a change in their prior knowledge about which combinations of stimuli they usually encounter, and accordingly a change in their discrimination performance between pre- and post-test. We found a significant interaction between the two factors pre/post-test and congruent/incongruent: after training, discrimination thresholds for the incongruent stimuli were increased relative to the thresholds for congruent stimuli, suggesting that subjects learned to combine the two signals effectively.
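The opening claim, that integrating two signals yields a more reliable estimate, follows from the standard maximum-likelihood cue-combination model underlying Bayesian accounts like this one. A minimal sketch (the function name and the numeric values are illustrative assumptions, not taken from the abstract):

```python
def integrate(est_v, var_v, est_h, var_h):
    """Reliability-weighted fusion of a visual and a haptic estimate,
    assuming independent Gaussian noise on each cue."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)  # weight proportional to reliability
    w_h = 1 - w_v
    combined = w_v * est_v + w_h * est_h
    combined_var = (var_v * var_h) / (var_v + var_h)
    return combined, combined_var

# Example: a precise visual cue (variance 1.0) and a noisier haptic
# cue (variance 4.0). The fused variance is smaller than either
# single-cue variance, i.e. the combined estimate is more reliable.
est, var = integrate(est_v=10.0, var_v=1.0, est_h=12.0, var_h=4.0)
```

The combined variance, var_v·var_h / (var_v + var_h), is always below the smaller of the two input variances, which is why knowing that two signals belong together pays off perceptually.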