
Released

Talk

Multimodal Sensor Fusion in the Human Brain

MPS-Authors

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Bülthoff, H. (2003). Multimodal Sensor Fusion in the Human Brain. Talk presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003). Las Vegas, NV, USA. 2003-10-27 - 2003-10-31.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-DB17-9
Abstract
The question of how we identify and interact with three-dimensional objects, given that only two-dimensional patterns of light are received by the retina or camera target, has provided fruitful labor for philosophers, psychologists, neuroscientists and engineers for many years. The research philosophy in our perception-action laboratory at the MPI in Tübingen is to study human information processing in a closed perception-action loop, in which the action of the observer also changes the input to our senses. In psychophysical studies we have shown that humans integrate multimodal sensory information in a statistically optimal way, weighting cues according to their reliability. A better understanding of multimodal sensor fusion will allow us to build better systems for medical or entertainment robots, in which the design effort for visual, auditory, haptic, vestibular and proprioceptive simulation is guided by the weight of each cue in multimodal sensor fusion.
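
The statistically optimal integration referred to in the abstract is commonly formalized as maximum-likelihood fusion of independent Gaussian cues, in which each cue is weighted by its reliability (inverse variance). The Python sketch below illustrates that general scheme only; the function name, variable names and example values are illustrative assumptions and are not taken from the talk.

    import numpy as np

    def fuse_cues(estimates, variances):
        """Maximum-likelihood fusion of independent Gaussian cue estimates.

        Each cue is weighted by its reliability (inverse variance):
        s_hat = sum_i w_i * s_i, with w_i = (1/var_i) / sum_j (1/var_j).
        """
        estimates = np.asarray(estimates, dtype=float)
        variances = np.asarray(variances, dtype=float)
        reliabilities = 1.0 / variances              # reliability = inverse variance
        weights = reliabilities / reliabilities.sum()
        fused = np.dot(weights, estimates)           # reliability-weighted average
        fused_variance = 1.0 / reliabilities.sum()   # fused variance <= smallest single-cue variance
        return fused, fused_variance

    # Illustrative example: a visual and a haptic size estimate (arbitrary units).
    visual_estimate, visual_var = 10.0, 1.0   # more reliable cue
    haptic_estimate, haptic_var = 12.0, 4.0   # noisier cue
    fused, var = fuse_cues([visual_estimate, haptic_estimate], [visual_var, haptic_var])
    print(f"fused estimate = {fused:.2f}, fused variance = {var:.2f}")
    # fused estimate = 10.40, fused variance = 0.80

In this scheme the fused estimate lies closer to the more reliable (here, visual) cue, and its variance is lower than that of either cue alone, which is the sense in which the combination is statistically optimal.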