Journal Article

Critical Neuronal Models with Relaxed Timescale Separation


Levina, A.
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society
Max Planck Institute for Biological Cybernetics, Max Planck Society


Das, A., & Levina, A. (2019). Critical Neuronal Models with Relaxed Timescale Separation. Physical Review X, 9: 021062, pp. 1-11. doi:10.1103/PhysRevX.9.021062.

Cite as: https://hdl.handle.net/21.11116/0000-0004-A9C8-C
Power laws in nature are considered to be signatures of complexity. The theory of self-organized criticality (SOC) was proposed to explain their origins. A long-standing principle of SOC is the separation-of-timescales axiom: external input is delivered to the system at a much slower rate than the timescale of the internal dynamics. The statistics of neural avalanches in the brain were demonstrated to follow a power law, indicating closeness to a critical state. Moreover, criticality was shown to be a beneficial state for various computations, leading to the hypothesis that the brain is a SOC system. However, for neuronal systems that are constantly bombarded by incoming signals, the separation-of-timescales assumption is unnatural. Recently, it was experimentally demonstrated that a proper correction of the avalanche detection algorithm to account for the increased drive during task performance leads to a change of the power-law exponent from 1.5 to approximately 1.3, but there is so far no theoretical explanation for this change. Here, we investigate the importance of timescale separation by partly abandoning it in various models. We achieve this by allowing for an external input during the avalanche, without compromising the separation of avalanches. We develop an analytic treatment and provide numerical simulations of a simple neuronal model. If the input strength scales as one over the network size, we call this the moderate input regime. In this regime, scale-free behavior is observed; i.e., the avalanche-size distribution follows a power law with exponent 1.25, independent of the exact size of the input. For a perfectly timescale-separated system, an exponent of 1.5 is observed. Thus, the universality class of the system is changed by the external input, and the change of the exponent is in good agreement with experimental observations from nonhuman primates. We confirm our analytical findings by simulations of the more realistic branching network model.
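The setup described in the abstract can be illustrated with a toy simulation: a critical branching process in which each active unit triggers on average one successor, optionally with extra activations injected while an avalanche is still running. This is a minimal sketch, not the authors' actual model; the function names, the Poisson offspring choice, the drive parameter `h`, and the size cap are all assumptions made for illustration.

```python
import random

def poisson(lam, rng):
    """Draw a Poisson(lam) variate via Knuth's algorithm (fine for small lam)."""
    L = 2.718281828459045 ** (-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def avalanche_size(n_max, h, rng):
    """Simulate one avalanche of a simple branching process.

    Each active unit spawns Poisson(1) offspring (critical branching).
    With h > 0, an external drive injects Poisson(h) extra activations per
    generation *while the avalanche runs*, relaxing timescale separation.
    The avalanche is truncated at n_max events (illustrative finite-size cap).
    """
    active, size = 1, 0          # avalanche seeded by a single spike
    while active > 0 and size < n_max:
        size += active
        nxt = sum(poisson(1.0, rng) for _ in range(active))
        nxt += poisson(h, rng)   # input delivered during the avalanche
        active = nxt
    return size

rng = random.Random(0)
# Perfect timescale separation (h = 0) vs. input during avalanches (h > 0)
sizes_sep = [avalanche_size(2000, 0.0, rng) for _ in range(500)]
sizes_inp = [avalanche_size(2000, 0.2, rng) for _ in range(500)]
```

Histogramming `sizes_sep` and `sizes_inp` on log-log axes would let one compare the empirical size distributions in the two regimes; the paper's analytical result concerns how such a distribution's exponent shifts (1.5 vs. 1.25) when input scales appropriately with network size, which this crude sketch does not reproduce quantitatively.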