

Poster

Signatures of criticality observed in efficient coding networks

MPS-Authors

Logothetis,  NK
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Levina, A., Safavi, S., Logothetis, N., & Chalk, M. (2020). Signatures of criticality observed in efficient coding networks. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2020), Denver, CO, USA.


Cite as: http://hdl.handle.net/21.11116/0000-0005-EC12-D
Abstract
Over the last decades, multiple studies have reported signatures of criticality in various neuronal recordings. Moreover, theoretical investigations demonstrate that multiple aspects of information processing are optimized at a second-order phase transition. These studies motivated the hypothesis that the brain operates close to a critical state. To evaluate how distance from criticality may influence neural computation, researchers have typically considered neural models that can attain various states (critical and non-critical) depending on control parameters (e.g. connection strength), and quantified how general information-processing capabilities, such as sensitivity to input, dynamic range, or information transmission, depend on these parameters. Certainly, being in a state with such optimized capabilities is relevant for computation in the brain, but these capabilities are too abstract to provide a concrete implementation. For instance, all of them are relevant for coding sensory information, yet merely tuning the proximity to criticality cannot yield a neural implementation of coding under resource constraints. In contrast, frameworks such as efficient coding provide both the objective to maximize and the implementation. We therefore introduce a novel complementary approach: we study a network that implements efficient coding, and we investigate the presence of scale-free neuronal avalanches in the optimized network. We consider a network of leaky integrate-and-fire (LIF) neurons with synaptic transmission delays whose connectivity and dynamics are optimized for efficient coding. Previously, it was shown that the performance of such networks varies non-monotonically with noise amplitude. We consider networks with different noise amplitudes and evaluate how close each is to a critical state by measuring the deviation of its avalanche size distribution from the nearest power law.
Interestingly, only in the optimized network does the distribution of avalanche sizes truly follow a power law. This result has important implications, as it shows how two influential and previously disparate fields, efficient coding and criticality, might be intimately related.
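The avalanche analysis described in the abstract can be sketched as follows. This is a minimal illustration, not the poster's actual pipeline: the definition of an avalanche as a run of consecutive non-empty time bins, the continuous maximum-likelihood estimator for the power-law exponent, and the use of the Kolmogorov-Smirnov distance as the "deviation from the nearest power law" are common choices in the criticality literature and are assumed here for concreteness.

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Split a binned population spike-count series into avalanches:
    runs of consecutive non-empty bins, separated by empty bins.
    The size of an avalanche is the total spike count in its run."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

def powerlaw_mle_ks(sizes, s_min=1.0):
    """Fit a power-law tail to avalanche sizes (continuous maximum-likelihood
    exponent estimate) and return the Kolmogorov-Smirnov distance between
    the empirical and fitted cumulative distributions."""
    s = np.sort(sizes[sizes >= s_min].astype(float))
    n = len(s)
    # MLE exponent for a continuous power law with lower cutoff s_min
    alpha = 1.0 + n / np.sum(np.log(s / s_min))
    # KS distance: empirical CDF vs. fitted power-law CDF
    ecdf = np.arange(1, n + 1) / n
    model_cdf = 1.0 - (s / s_min) ** (1.0 - alpha)
    return alpha, np.max(np.abs(ecdf - model_cdf))
```

A small KS distance indicates that the avalanche size distribution is close to its nearest power law, which is the proximity measure the abstract describes; comparing this distance across networks with different noise amplitudes would then rank them by closeness to criticality.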