  The topology of E/I recurrent networks regulates their ability to learn the dynamics of chaotic attractors

Giannakakis, E., Khajehabdollahi, S., Buendia, V., & Levina, A. (2022). The topology of E/I recurrent networks regulates their ability to learn the dynamics of chaotic attractors. Poster presented at Bernstein Conference 2022, Berlin, Germany.

Creators

Giannakakis, E.¹, Author
Khajehabdollahi, S.², Author
Buendia, V.², Author
Levina, A.¹, Author
Affiliations:
¹ Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3505519
² Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3017468

Content

Free keywords: -
 Abstract:


Most theoretical studies of the computational capabilities of balanced, recurrent E/I networks assume random, uniform connectivity between a network's neurons. The dynamics of such networks have been extensively modeled [1], and their computational capabilities have been repeatedly demonstrated. Still, biological networks rarely exhibit uniform connectivity; instead, they form complex topologies, with each neuron type following different connectivity patterns. These topologies have been associated with distinct dynamics [2] and with the ability to perform various computations [3].



Here, we examine how network topology affects a network's ability to learn complex relationships. In particular, we investigate an Echo State Network (ESN) that learns to predict a chaotic dynamical system. We create a two-layered network of Wilson-Cowan units: an input drives a recurrent E/I layer, which in turn projects via feedforward connections to a readout population. This readout population is read out by a trainable linear layer that aims to predict the future evolution of the input time series. After training, the predictions of the linear layer are fed back as input, creating a closed-loop system that reproduces the behavior of the original dynamical system [4]. With this setup, we examine how different E/I connectivity structures affect the quality of the learned dynamics. First, we examine the impact of the connectivity ranges of excitatory and inhibitory neurons when the network learns the chaotic Lorenz attractor. Our findings consistently show that broad inhibitory connectivity in both the recurrent and feedforward connections, combined with narrow excitatory connectivity, leads to optimal performance, matching a pattern observed in cortical networks [3]. We further examine whether separating the network into specialized E/I assemblies allows the simultaneous learning of multiple attractors. Finally, we study the ability of different biologically inspired plasticity mechanisms to optimize the network's connectivity and create near-optimal topologies in an unsupervised manner. In summary, our findings indicate that the topology of a recurrent network can have a strong impact on its ability to reproduce complex chaotic dynamics.
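The ESN training and closed-loop scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' Wilson-Cowan E/I implementation: it uses a generic tanh reservoir, and the reservoir size, spectral radius, ridge penalty, and integration step are illustrative assumptions. It learns one-step-ahead prediction of a Lorenz trajectory with a linear readout, then runs the network in closed loop by feeding its own prediction back as input.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Generate a Lorenz trajectory by Euler integration (dt assumed) ---
def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

data = lorenz_series(5000)
data = (data - data.mean(0)) / data.std(0)   # normalise each coordinate

# --- Random recurrent reservoir (uniform connectivity, no E/I structure) ---
N = 400
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # rescale to spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, (N, 3))

def run_reservoir(inputs):
    """Drive the reservoir with a sequence of 3-D inputs, collect states."""
    states = np.empty((len(inputs), N))
    h = np.zeros(N)
    for t, u in enumerate(inputs):
        h = np.tanh(W @ h + W_in @ u)
        states[t] = h
    return states

# --- Train the linear readout by ridge regression: predict the next step ---
X = run_reservoir(data[:-1])
Y = data[1:]
skip = 100                                   # discard the initial transient
Xs, Ys = X[skip:], Y[skip:]
W_out = np.linalg.solve(Xs.T @ Xs + 1e-6 * np.eye(N), Xs.T @ Ys)

nrmse = np.sqrt(np.mean((Xs @ W_out - Ys) ** 2))   # one-step training error

# --- Closed loop: the readout's prediction becomes the next input [4] ---
h = X[-1].copy()
u = data[-1]
free_run = np.empty((200, 3))
for t in range(200):
    h = np.tanh(W @ h + W_in @ u)
    u = h @ W_out
    free_run[t] = u
```

In the study itself, the quality of `free_run` (how well the autonomous trajectory reproduces the attractor) is what varies with the E/I connectivity structure; here the reservoir is unstructured, so the sketch only shows the training and feedback mechanics.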

Details

Language(s):
 Dates: 2022-09
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: -
 Degree: -

Event

Title: Bernstein Conference 2022
Place of Event: Berlin, Germany
Start-/End Date: 2022-09-13 - 2022-09-16


Source 1

Title: Bernstein Conference 2022
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: PIV 7
Start / End Page: -
Identifier: -