Talk

Efficient Coding of Multisensory or Multimodality Inputs

Citation

Zhaoping, L. (2016). Efficient Coding of Multisensory or Multimodality Inputs. Talk presented at AVA Christmas Meeting 2016. London, UK. 2016-12-19.


Cite as: https://hdl.handle.net/21.11116/0000-0002-C24B-F
Abstract
We combine inputs from multiple sensory modalities, for example vision, audition, and touch, and also from multiple unisensory cues, for example binocular disparity and motion parallax as sources of information about visual depth. Neurons’ preferences for the corresponding features are often congruent: for example, medial superior temporal neurons are tuned to the heading direction of self-motion based on optic flow or vestibular inputs, with preferred directions that frequently match between the two modalities (Gu, Angelaki, & DeAngelis, 2008). Such matches make cue integration straightforward. However, for many medial superior temporal neurons the feature preferences are different or even opposite; similarly, the preferences of middle temporal cortex neurons for disparity- and parallax-based depth can be either congruent or opposite (Nadler et al., 2013). I propose that this diversity achieves efficient coding when the input sources are only partially redundant (Barlow, 1961). Efficiency requires representations (also called bases) in which the inputs are decorrelated. For two sources, this implies two bases in which the feature signals sampled from the two sources are either (weighted and) added or (weighted and) subtracted; these give rise, respectively, to congruently and oppositely tuned cells. The exact forms of the bases (i.e., the relative weighting of the sources), and the neural sensitivities to them, should depend on, and adapt to, the statistical properties of the inputs (e.g., the correlation between the sources and the signal-to-noise ratios). Coding of visual-vestibular heading direction and coding of stereoscopic and motion-parallax depth then both become analogous to efficient stereo coding (Li & Atick, 1994). Generalization to more than two senses or modalities is straightforward.
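
The decorrelation argument can be illustrated with a toy numerical sketch (not part of the talk; the signal model, noise level, and gain rule below are illustrative assumptions). Two correlated cue signals are transformed into a sum ("congruent") channel and a difference ("opposite") channel, which removes their correlation, and channel gains are then set from an assumed signal-to-noise ratio, in the spirit of efficient stereo coding (Li & Atick, 1994).

```python
# Minimal sketch: decorrelating two correlated sensory inputs into
# sum ("congruent") and difference ("opposite") channels.
# All parameter values are illustrative assumptions, not from the talk.
import numpy as np

rng = np.random.default_rng(0)

# Two input channels (e.g., disparity- and parallax-based depth signals):
# a shared underlying feature plus channel-specific noise.
n = 100_000
shared = rng.normal(size=n)                 # redundant (correlated) part
s1 = shared + 0.5 * rng.normal(size=n)      # cue / modality 1
s2 = shared + 0.5 * rng.normal(size=n)      # cue / modality 2

x = np.stack([s1, s2])
print("input covariance:\n", np.cov(x))     # strong off-diagonal correlation

# Sum and difference bases decorrelate the two inputs.
x_plus = (s1 + s2) / np.sqrt(2)             # "congruent" channel
x_minus = (s1 - s2) / np.sqrt(2)            # "opposite" channel
y = np.stack([x_plus, x_minus])
print("decorrelated covariance:\n", np.cov(y))  # off-diagonal ~ 0

# Gains on each channel would adapt to its signal-to-noise ratio; the rule
# below (an assumption) suppresses channels whose variance is noise-dominated.
noise_var = 0.25                            # assumed per-channel noise variance
signal_var = np.maximum(np.var(y, axis=1) - noise_var, 0.0)
snr = signal_var / noise_var
gain = np.sqrt(snr / (1.0 + snr))           # illustrative gain rule
print("channel SNRs:", snr, "gains:", gain)
```

With strongly correlated inputs, most of the variance ends up in the sum channel, so it receives the larger gain; if the correlation between the sources were weak, the two channels would carry comparable variance and comparable gains, consistent with the adaptation to input statistics described in the abstract.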