Released

Poster

Change-based inference in attractor nets: Linear analysis

Citation

Moazzezi, R., & Dayan, P. (2009). Change-based inference in attractor nets: Linear analysis. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2009), Salt Lake City, UT, USA. doi:10.3389/conf.neuro.06.2009.03.020.


Cite as: https://hdl.handle.net/21.11116/0000-0005-0E90-9
Abstract
A conventional view of information processing by line (manifold) attractor networks holds that they represent processed information by the identity and/or location (on a null-stable manifold) of the attractor state to which they converge [1,2]. Subsequently, a readout mechanism (which we call attractor-based readout) performs decoding by identifying the converged state of the network. Although this method has been successfully applied to a variety of tasks, including orientation estimation, cue integration and decision making, there is little evidence for attractor states in cortical networks. Neurons in sensory cortical areas rarely exhibit persistent activity in natural environments, and the firing rates of most apparently persistently active neurons in prefrontal cortical areas also change systematically over time.
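As a concrete illustration of attractor-based readout (a minimal sketch, not taken from the poster), the following Python fragment integrates generic rate dynamics to convergence and decodes the location of the settled activity bump. The dynamics, the connectivity W, the input h, and the peak-location decoder are all assumptions made for this example.

    import numpy as np

    def attractor_based_readout(W, h, r0, tau=1.0, dt=0.01, steps=5000):
        # Integrate generic rate dynamics tau * dr/dt = -r + W r + h
        # until the state has (approximately) settled on the attractor.
        r = r0.astype(float).copy()
        for _ in range(steps):
            r += (dt / tau) * (-r + W @ r + h)
        # Decode by identifying the converged state: here, the location
        # of the peak of the activity bump on the (discretised) manifold.
        return int(np.argmax(r))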

We have recently suggested a new computational view of attractor networks, which involves reading information from the early portion of the trajectory of their states as they evolve towards their attractors. This change-based readout makes decisions based on the way a statistic of the state of the network changes over time [3]. We showed that this method can perform nearly as well as an ideal observer in a model visual hyperacuity task, and demonstrated its additional computational benefits such as the possibility of automatic invariance to certain irrelevant input dimensions.
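A matching sketch of change-based readout, again hypothetical rather than the poster's specific network: the same dynamics are run for only a short, early portion of the trajectory, and the decision is the sign of the change in a chosen scalar statistic of the state (here, the centre of mass of the population activity, chosen purely for illustration).

    import numpy as np

    def centre_of_mass(r):
        # One possible statistic of the network state: the centre of
        # mass of the population activity over preferred locations.
        return np.sum(np.arange(len(r)) * r) / np.sum(r)

    def change_based_readout(W, h, r0, statistic=centre_of_mass,
                             tau=1.0, dt=0.01, early_steps=50):
        # Integrate the same rate dynamics, but only over the early
        # part of the trajectory, well before the attractor is reached.
        r = r0.astype(float).copy()
        s0 = statistic(r)
        for _ in range(early_steps):
            r += (dt / tau) * (-r + W @ r + h)
        # Decide from the direction in which the statistic changes,
        # e.g. target to the left (-1) or right (+1) of the carrier.
        return int(np.sign(statistic(r) - s0))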

Here we provide theoretical and empirical evidence that links change-based and attractor-based inference. We design a network that performs two tasks: a discrimination task (solved by change-based readout) that involves deciding whether a low-contrast target bar lies to the left or right of a high-contrast carrier bar, and an estimation task (solved by attractor-based readout) that involves determining the location of the carrier in the absence of the target. Both tasks and the network are designed such that linearization is an excellent approximation. We first show that a necessary condition for the network to be near-optimal for discrimination via change-based readout is for attractor-based readout to be sub-optimal for estimation, and, through the linearization, show how the network performs the two tasks. Then, we show that although the network is strictly sub-optimal at both tasks, it is nevertheless near-optimal in both cases.
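To indicate what such a linearization involves (a generic sketch under stated assumptions, not the poster's derivation): expanding rate dynamics tau * dr/dt = -r + W r + h around a fixed point r* on the attractor manifold, and assuming W is diagonalizable, the perturbation delta r = r - r* obeys

    \tau\,\delta\dot{\mathbf{r}} = (W - I)\,\delta\mathbf{r}
    \quad\Longrightarrow\quad
    \delta\mathbf{r}(t) = e^{(W - I)t/\tau}\,\delta\mathbf{r}(0)
                        = \sum_k e^{(\lambda_k - 1)t/\tau}\,\big(\mathbf{u}_k^{\top}\delta\mathbf{r}(0)\big)\,\mathbf{v}_k,

where lambda_k, v_k and u_k are the eigenvalues and the right and left eigenvectors of W. In this picture, attractor-based readout relies on the component in the marginally stable (lambda_k near 1) mode that survives as t goes to infinity, whereas change-based readout exploits the early-time transients carried by the decaying modes; this is the sense in which a linear analysis allows the two readouts, and their respective optimality, to be compared.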