  Efficient Coding of Multisensory or Multimodality Inputs

Zhaoping, L. (2016). Efficient Coding of Multisensory or Multimodality Inputs. Talk presented at AVA Christmas Meeting 2016. London, UK. 2016-12-19.


Creators

show
hide
 Creators:
Zhaoping, L1, Author           
Affiliations:
1External Organizations, ou_persistent22              

Content
 Abstract: We combine inputs from multiple sensory modalities, for example, vision, audition, and touch, and also from multiple unisensory cues, for example, binocular disparity and motion parallax information about visual depth. Neurons’ preferences for distinct features are often congruent: For example, medial superior temporal neurons are tuned to the heading direction of self-motion based on optic flow or vestibular inputs, with preferred directions that frequently match between modalities (Gu, Angelaki, & DeAngelis, 2008). Such matches make cue integration straightforward. However, for many medial superior temporal neurons, the feature preferences are different or opposite. Similarly, the preferences of middle temporal cortex neurons to disparity- and parallax-based depth can be either congruent or opposite (Nadler et al., 2013). I propose that this achieves efficient coding given incomplete redundancy in the input sources (Barlow, 1961). Efficiency requires creating representations (also called bases) in which the inputs are decorrelated. For two sources, this implies two bases in which inputs that sample the features from the two sources are either (weighted and) added or (weighted and) subtracted; these are the genesis respectively of congruently and oppositely tuned cells. The exact forms (i.e., relative weighting of the sources) of, and neural sensitivities to, individual bases should depend on, and adapt to, the statistical properties of the inputs (e.g., the correlation between the sources and the signal-to-noise ratios). Coding of visual-vestibular heading direction and stereoscopic-motion-parallax depth both become analogous to efficient stereo coding (Li & Atick, 1994). Generalization to more than two senses or modalities is straightforward.
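The decorrelation argument in the abstract can be checked numerically: for two equal-variance sources with correlation r, the equal-weight sum and difference channels have zero covariance, with variances 1 + r and 1 - r. A minimal sketch (illustrative only; the variable names, the unit-variance Gaussian sources, and the equal weighting are assumptions for this example, not details from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated "sensory" sources with equal variance, e.g. visual and
# vestibular estimates of heading direction, or disparity- and
# parallax-based depth signals.
r = 0.7                        # correlation between the two sources
cov = np.array([[1.0, r],
                [r, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)  # shape (N, 2)

# Decorrelating bases: the sum and difference channels, corresponding to
# congruently and oppositely tuned cells respectively.
s_plus = (x[:, 0] + x[:, 1]) / np.sqrt(2)
s_minus = (x[:, 0] - x[:, 1]) / np.sqrt(2)

c = np.cov(s_plus, s_minus)
print(np.round(c, 3))
# Off-diagonal entries are near 0: the two channels are decorrelated.
# Channel variances are about 1 + r (sum) and 1 - r (difference), so the
# gain given to each basis should adapt to the input correlation and
# signal-to-noise ratios, as the abstract argues.
```

With unequal source weighting or unequal variances the decorrelating bases rotate away from the pure sum and difference, which is why the relative weighting of the sources should track the input statistics.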

Details

Dates: 2017-08
Publication Status: Issued
Identifiers: DOI: 10.1177/0301006617710756

Event

Title: AVA Christmas Meeting 2016
Place of Event: London, UK
Start-/End Date: 2016-12-19
Invited: Yes


Source 1

Title: Perception
Source Genre: Journal
Publ. Info: London : Pion Ltd.
Volume / Issue: 46 (10)
Start / End Page: 1206
Identifier: ISSN: 0301-0066
CoNE: https://pure.mpg.de/cone/journals/resource/954925509369