
Released

Poster

Temporally distributed information gets optimally combined by change-based information processing

Citation

Moazzezi, R., & Dayan, P. (2010). Temporally distributed information gets optimally combined by change-based information processing. Poster presented at 7th Computational and Systems Neuroscience Meeting (COSYNE 2010), Salt Lake City, UT, USA.


Cite as: https://hdl.handle.net/21.11116/0000-0007-4DD1-7
Abstract
Neural circuits are responsible for carrying out cortical computations, which consist of separating relevant information from irrelevant information and noise in the incoming input (or stimuli). However, circuits do not receive all the information they need for their computations at once; in many situations, the input is presented to the network over the course of a few hundred milliseconds. This raises the question of how information provided at different times is integrated. One challenge is that the input is in general also affected by nuisance parameters (irrelevant information) that may change continually during integration and processing. Here we address this problem in the context of a hyperacuity task called the bisection task. In this task, an array of three parallel bars is presented to subjects, who must decide whether the middle bar is closer to the right or the left bar. The signal (i.e. the relevant information) is the sign of the deviation of the middle bar from the midpoint of the array, and is fixed during a trial. Fixational eye movements, such as micro-tremors, continually change the location of the array during each trial, and are therefore the source of the irrelevant information. The duration of a trial is of the order of a few hundred milliseconds, making this an excellent model task for studying how neural circuits process information that arrives gradually over time. We modelled this using a recurrent network inspired by primary visual cortical circuits. We have previously shown that, in the presence of trial-by-trial variability in the overall location of the three-bar array (but no variability within a trial), coding information by the early change of the network's state (which we call Change-based Processing) can lead to near-optimal performance (1).
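The task setup described above can be sketched as follows. This is a minimal illustration, not the poster's actual model: all parameter values (number of units, bar positions, bump width, jitter scale) are hypothetical, and a Gaussian random walk stands in for fixational eye movements.

```python
import numpy as np

def bisection_input(offset, n_steps=50, n_units=100, jitter_sd=2.0, seed=0):
    """Population input for a toy bisection task.

    Three Gaussian bumps of activity over n_units spatial positions represent
    the three bars. `offset` shifts the middle bar from the array midpoint
    (its sign is the task-relevant signal). A common random walk shifts all
    three bars together within the trial, mimicking fixational eye movements
    (the task-irrelevant nuisance variable).
    Returns an (n_steps, n_units) array of non-negative activities.
    """
    rng = np.random.default_rng(seed)
    positions = np.arange(n_units, dtype=float)
    left, right = 30.0, 70.0
    middle = (left + right) / 2.0 + offset
    # Common jitter: the whole array moves, the relative offset does not.
    drift = np.cumsum(rng.normal(0.0, jitter_sd, n_steps))
    frames = np.zeros((n_steps, n_units))
    for t in range(n_steps):
        for centre in (left + drift[t], middle + drift[t], right + drift[t]):
            frames[t] += np.exp(-0.5 * ((positions - centre) / 2.0) ** 2)
    return frames
```

Because the jitter is common to all three bars, the relevant quantity (the middle bar's deviation from the midpoint) is preserved frame by frame while the absolute location varies, which is exactly the separation problem the network must solve.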
This method, which is superior to conventional coding by attractor states, makes its decision based on the sign of the difference between two measurements of a scalar statistic of the neural activity (in this case, its centre of mass). The timings of these two measurements can be learned, but once learned they are fixed and independent of the strength of the signal. Here we show that the same method successfully combines the information that the network receives over time, still performing near-optimally. Theoretical analysis of this performance indicates that an eigenmode of a sub-matrix of the recurrent weight matrix is responsible both for extracting the relevant information and for combining it near-optimally over time; this eigenmode also plays a key role in the subsequent evolution of the statistic whose change is the basis for the ultimate decision. We also demonstrate that the magnitude of the change of the statistic reflects the amount of information available in support of the decision.
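The change-based readout described above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the poster's implementation: the network state is taken to be a vector of non-negative firing rates over preferred positions, and the two measurement times t1 and t2 are treated as already learned and fixed.

```python
import numpy as np

def centre_of_mass(activity, positions):
    """Scalar statistic of the population: activity-weighted mean position."""
    return positions @ activity / activity.sum()

def change_based_decision(frames, t1, t2):
    """Change-based readout sketch.

    Measures the centre of mass of the network state at two fixed,
    previously learned times t1 < t2 and decides from the sign of the
    change; the magnitude of the change tracks the evidence available
    in support of the decision.

    frames: (n_steps, n_units) array of non-negative activities.
    Returns (decision, magnitude) with decision in {-1.0, 0.0, +1.0}.
    """
    positions = np.arange(frames.shape[1], dtype=float)
    delta = (centre_of_mass(frames[t2], positions)
             - centre_of_mass(frames[t1], positions))
    return float(np.sign(delta)), abs(delta)
```

For example, an activity bump that moves from unit 1 to unit 3 between the two measurement times yields decision +1.0 with magnitude 2.0. Note that the decision depends only on the sign of the change, consistent with the measurement times being independent of signal strength, while the magnitude of the change carries the graded evidence.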