Role of single-neuron and network-mediated timescales in recurrent neural networks solving long-memory tasks

Zeraati, R., Khajehabdollahi, S., Giannakakis, E., Schäfer, T., Martius, G., & Levina, A. (2023). Role of single-neuron and network-mediated timescales in recurrent neural networks solving long-memory tasks. In Bernstein Conference 2023.

Basic

Genre: Meeting Abstract

Creators

Zeraati, R. (1), Author
Khajehabdollahi, S. (2), Author
Giannakakis, E. (1), Author
Schäfer, T. J. (1), Author
Martius, G., Author
Levina, A. (1), Author
Affiliations:
(1) Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3505519
(2) Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3017468

Content

Free keywords: -
Abstract: The brain excels at solving complex tasks with intricate temporal dependencies by maintaining the memory of previous inputs across long periods. The long timescales required for solving such tasks may arise either from biophysical properties of individual neurons (the single-neuron timescale, e.g., the membrane time constant) or from recurrent interactions among them (the network-mediated timescale) [1,2]. Both mechanisms operate in brain networks, but their exact interplay and their relation to the computational requirements of specific tasks remain poorly understood. We investigate the role of timescales by training recurrent neural networks (RNNs) to solve N-parity and N-delayed match-to-sample (N-DMS) tasks with increasing memory requirements controlled by N (Fig a). A binary sequence is given as input to the network. The network has to output 1 or 0, indicating either the binary sum of the last N digits (N-parity) or whether the digit presented at the current time t matches the digit presented at time t-N+1 (N-DMS). Networks are trained using two distinct curricula with gradually increasing N: single-N or multi-N. In the single-N curriculum, networks learn a new N at each step of the curriculum, whereas in the multi-N curriculum, they learn a new N without forgetting the solutions to previous Ns (Fig b), similar to biological learning. Each neuron has a leak parameter τ setting the single-neuron timescale in the absence of network interactions. We optimize τ together with the recurrent weights [3,4]. We then estimate the network-mediated timescale τn from the autocorrelation decay of each neuron's activity while driving the network with uncorrelated binary inputs. We find that in both curricula, RNNs develop longer timescales with increasing N, but via distinct mechanisms (Fig c). Single-N RNNs operate in a strongly inhibitory state and rely mainly on increasing their τ to adapt their timescales. Multi-N RNNs, in contrast, operate closer to a balanced state and use only recurrent interactions to increase their τn, compatible with findings in primate cortex [5]. We show that adapting timescales via network-mediated mechanisms, as in multi-N RNNs, increases training speed and stability to perturbations, and allows RNNs to generalize better to tasks beyond their training set (Fig d-f). Our results suggest that biologically inspired learning dynamics give rise to network-mediated adaptation of timescales to task demands, improving RNNs' computational robustness.
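
To make the task definitions and the single-neuron timescale concrete, here is a minimal Python sketch (illustrative only, not the authors' implementation; the function names, the tanh nonlinearity, and the Euler-style leak update are assumptions) that generates targets for both tasks and performs one leaky-RNN update:

import numpy as np

def n_parity_targets(x, N):
    # Target at time t: binary sum (parity) of the last N input digits.
    return np.array([x[t - N + 1 : t + 1].sum() % 2 for t in range(N - 1, len(x))])

def n_dms_targets(x, N):
    # Target at time t: 1 if the digit at t matches the digit at t-N+1, else 0.
    return np.array([int(x[t] == x[t - N + 1]) for t in range(N - 1, len(x))])

def leaky_rnn_step(h, x_t, W_rec, w_in, tau):
    # One update of a leaky RNN with scalar input x_t. tau (scalar or
    # per-neuron) is the leak parameter: with W_rec = 0, activity decays
    # over roughly tau steps, i.e., the single-neuron timescale.
    return (1.0 - 1.0 / tau) * h + (1.0 / tau) * np.tanh(W_rec @ h + w_in * x_t)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=200)  # uncorrelated binary input sequence
y_parity = n_parity_targets(x, N=5)
y_dms = n_dms_targets(x, N=5)

In training, tau would be optimized jointly with W_rec and w_in, as the abstract states; the update rule above is one common leaky-RNN discretization and may differ from the one used in the study.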

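The network-mediated timescale τn is then estimated, as described in the abstract, from the autocorrelation decay of each neuron's activity while the network is driven with uncorrelated binary inputs. Below is a minimal sketch assuming a plain exponential fit, which is the simplest estimator (and can be biased for short recordings); the estimator used in the study may differ:

import numpy as np
from scipy.optimize import curve_fit

def autocorrelation(a, max_lag):
    # Empirical autocorrelation of a 1D activity trace at lags 0..max_lag.
    a = a - a.mean()
    return np.array([np.mean(a[: len(a) - k] * a[k:]) for k in range(max_lag + 1)]) / a.var()

def fit_timescale(a, max_lag=50):
    # Fit AC(k) ~ exp(-k / tau_n) to obtain the network-mediated timescale tau_n.
    lags = np.arange(max_lag + 1)
    ac = autocorrelation(a, max_lag)
    (tau_n,), _ = curve_fit(lambda k, tau: np.exp(-k / tau), lags, ac, p0=[5.0])
    return tau_n

Applying fit_timescale to each neuron's activity trace yields a per-neuron τn that can be compared with the optimized leak parameter τ.
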
Details

Language(s): -
Dates: 2023-09
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: -
Degree: -

Event

Title: Bernstein Conference 2023
Place of Event: Berlin, Germany
Start-/End Date: 2023-09-26 - 2023-09-29

Source 1

Title: Bernstein Conference 2023
Source Genre: Proceedings
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: CT 4
Start / End Page: -
Identifier: -