Abstract:
Theoretical and experimental investigations of different neuronal systems suggest that operating close to a critical state can be beneficial for information processing. In particular, the dynamic range (the range of inputs an ideal observer can decode from the output of the system) was shown to be maximized at the phase transition [1]. In the probabilistic recurrent network traditionally used to model neuronal systems that can be tuned towards and away from the critical state, all interactions are excitatory. In this case there is only one phase transition: between no activity and ceaseless activity. Interestingly, when inhibitory interactions are added to the network [2], there are two transition points [3]: from no activity to sustained finite activity, and from finite activity to full system activation. We find that both transitions are associated with a locally increased dynamic range, although the second transition yields the overall largest dynamic range.
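A minimal sketch of such a probabilistic recurrent network with a fraction of inhibitory units, together with the classical dynamic-range measure Δ = 10 log10(s90/s10), might look as follows. The network size, coupling strength w, connectivity k, and inhibitory fraction below are illustrative choices, not the parameters used in [2, 3]:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(s, n=500, frac_inh=0.2, k=10, w=0.12, steps=300, burn=100):
    """Mean activity of a probabilistic recurrent network driven at rate s.
    A minimal sketch with a fraction frac_inh of inhibitory units; all
    parameter values are illustrative."""
    sign = np.where(rng.random(n) < frac_inh, -1.0, 1.0)  # inhibitory units contribute -w
    active = np.zeros(n, dtype=bool)
    rates = []
    for t in range(steps):
        pre = rng.integers(0, n, size=(n, k))        # k random presynaptic partners
        drive = w * (sign[pre] * active[pre]).sum(axis=1)
        p_fire = np.clip(s + drive, 0.0, 1.0)
        active = (rng.random(n) < p_fire) & ~active  # one-step refractory period
        if t >= burn:
            rates.append(active.mean())
    return float(np.mean(rates))

def dynamic_range(stimuli, responses):
    """Classical definition: Delta = 10 log10(s90/s10), where s_x is the
    stimulus at which the response crosses x% of its full range."""
    lo, hi = responses.min(), responses.max()
    s10 = np.interp(lo + 0.1 * (hi - lo), responses, stimuli)
    s90 = np.interp(lo + 0.9 * (hi - lo), responses, stimuli)
    return 10.0 * np.log10(s90 / s10)

stimuli = np.logspace(-4, -0.5, 8)
responses = np.array([simulate(s) for s in stimuli])
print(dynamic_range(stimuli, responses))
```

Sweeping the coupling w (or the inhibitory fraction) through the transition points would then trace out how the dynamic range peaks locally at each transition.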
There is, however, a caveat in the current dynamic range definition for recurrent networks: it relies on the mean response curve, and thus requires an infinite observation time, whereas everyday decisions typically involve very short time intervals. Here we constrain the time the ideal observer monitors the output of the network. In this case, noise corrupts the response, resulting in a distribution of mean outputs P(o|s∗), which makes it impossible to reconstruct the presented input s∗ with 100% certainty. Instead, we propose that two input signals s1 and s2 can be discriminated if the minimal discriminator error between them is smaller than ε. For a given observation time, we can then determine the stimuli that the network can reliably discriminate. We call the interval of these stimuli the finite observation dynamic range. As the observation time goes to infinity, the classical dynamic range definition is recovered.
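Under a simple Gaussian assumption for P(o|s∗), with trial-to-trial standard deviation shrinking as 1/√T with observation time T (an illustrative noise model, not the exact one analyzed here, and with a placeholder Hill-type response curve), the minimal discriminator error and the resulting finite-observation range can be sketched as:

```python
import numpy as np
from math import erfc, sqrt

def bayes_error(mu1, mu2, sigma, T):
    # Bayes-optimal discrimination error for two Gaussian output distributions
    # with equal std sigma/sqrt(T); the Gaussian form is an illustrative
    # assumption, not the abstract's exact P(o|s*).
    d = abs(mu1 - mu2) * sqrt(T) / sigma
    return 0.5 * erfc(d / (2 * sqrt(2)))

def finite_observation_range(stimuli, response, sigma, T, eps=0.05):
    """Extent (in log-stimulus units, i.e. decades) of the stimulus axis over
    which neighbouring stimuli remain discriminable with error < eps after
    observing the output for time T."""
    mu = response(stimuli)
    ok = [bayes_error(mu[i], mu[i + 1], sigma, T) < eps
          for i in range(len(stimuli) - 1)]
    log_s = np.log10(stimuli)
    return sum(log_s[i + 1] - log_s[i] for i, good in enumerate(ok) if good)

# placeholder saturating response curve, not the model's actual one
hill = lambda s: s / (s + 1e-2)
stimuli = np.logspace(-4, 0, 50)

for T in (100, 1000, 10000):
    print(T, finite_observation_range(stimuli, hill, sigma=0.3, T=T))
```

Because the error decreases monotonically with T, the set of discriminable stimuli only grows with observation time, and in the T → ∞ limit the classical dynamic range is recovered.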
We demonstrate that the finite observation dynamic range is not maximized for the critical excitatory network. Moreover, depending on the length of the observation time, differently tuned systems become optimal: the shorter the time, the more subcritical the optimal system becomes. Our results predict a diversity of subcritical tunings (with different timescales) in cortical networks, depending on the required reaction time. This diversity of timescales is in line with the reported hierarchy of timescales across the brain.