Item Details


Released

Poster

The topology of E/I recurrent networks regulates their ability to learn the dynamics of chaotic attractors

MPS-Authors
/persons/resource/persons263811

Giannakakis, E
Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons269520

Khajehabdollahi, S
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons263156

Buendia, V
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons173580

Levina, A
Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Giannakakis, E., Khajehabdollahi, S., Buendia, V., & Levina, A. (2022). The topology of E/I recurrent networks regulates their ability to learn the dynamics of chaotic attractors. Poster presented at Bernstein Conference 2022, Berlin, Germany.


Cite as: https://hdl.handle.net/21.11116/0000-000B-597F-4
Abstract



Most theoretical studies of the computational capabilities of balanced, recurrent E/I networks assume random uniform connectivity between a network's neurons. The dynamics of such networks have been extensively modeled [1] and their computational capabilities have been repeatedly demonstrated. Still, biological networks rarely exhibit uniform connectivity; instead, they are known to form complex network topologies, with each neuron type following different connectivity patterns. Furthermore, these topologies have been associated with distinct dynamics [2] and the ability to perform various computations [3].



Here, we examine how various network topologies affect a network's ability to learn complex relationships. In particular, we investigate an Echo State Network (ESN) that learns to predict a chaotic dynamical system. We create a two-layered neural network of Wilson-Cowan units in which an input drives a recurrent E/I layer, which in turn projects via feedforward connections to a readout population. This readout population is read by a trainable linear layer that aims to predict the future evolution of the input time series. After training, the predictions of the linear layer are fed back as input to the system, creating a closed-loop system that reproduces the behavior of the original dynamical system [4]. With this network set up, we examine how different E/I connectivity structures affect the quality of the learned dynamics. First, we examine the impact of connectivity ranges for excitatory and inhibitory neurons when the network learns a chaotic Lorenz attractor. Our findings consistently show that broader inhibitory connectivity in both the recurrent and feedforward connections, combined with narrow excitatory connectivity, leads to optimal performance, confirming a pattern observed in cortical networks [3]. We further examine whether separating the network into specialized E/I assemblies can allow the simultaneous learning of multiple attractors. Finally, we study the ability of different biologically inspired plasticity mechanisms to optimize the network's connectivity and create near-optimal topologies in an unsupervised manner. In summary, our findings indicate that the topology of a recurrent network may have a strong impact on its ability to reproduce complex chaotic dynamics.
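
As a rough illustration of the pipeline described in the abstract (a recurrent E/I reservoir driven by the Lorenz system, a trainable linear readout, and closed-loop generation), a minimal Python sketch follows. It is not the authors' implementation: it substitutes simple leaky-tanh units for Wilson-Cowan dynamics, uses random sign-constrained E/I connectivity rather than the spatially structured connectivity ranges studied in the poster, and all sizes and hyperparameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Lorenz attractor, integrated with a simple Euler step, as the target series.
def lorenz_series(T, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((T, 3))
    for t in range(T):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        out[t] = x
    return out

data = lorenz_series(6000)
data = (data - data.mean(0)) / data.std(0)   # normalize each coordinate

# Recurrent reservoir with excitatory and inhibitory columns (random topology;
# the poster instead varies the spatial connectivity ranges of E and I neurons).
n_exc, n_inh = 400, 100
n = n_exc + n_inh
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
W[:, :n_exc] = np.abs(W[:, :n_exc])       # outgoing excitatory weights >= 0
W[:, n_exc:] = -np.abs(W[:, n_exc:])      # outgoing inhibitory weights <= 0
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
W_in = rng.normal(0.0, 0.5, (n, 3))
leak = 0.3   # leaky-tanh units stand in for Wilson-Cowan dynamics

def run_reservoir(inputs, r0=None):
    r = np.zeros(n) if r0 is None else r0
    states = np.empty((len(inputs), n))
    for t, u in enumerate(inputs):
        r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ u)
        states[t] = r
    return states, r

# Train a linear readout by ridge regression to predict the next time step.
washout = 500
states, r_last = run_reservoir(data[:-1])
X, Y = states[washout:], data[1 + washout:]
reg = 1e-4
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n), X.T @ Y).T

# Closed loop: feed the readout's prediction back as the next input,
# so the network autonomously reproduces Lorenz-like dynamics.
u, r = data[-1], r_last
free_run = np.empty((1000, 3))
for t in range(1000):
    r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ u)
    u = W_out @ r
    free_run[t] = u

Comparing the free-running trajectory to a continued integration of the Lorenz system (for example, via attractor statistics rather than pointwise error) gives one rough way to assess how well the learned closed-loop dynamics match the original system under a given connectivity structure.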