Long timescales needed for memory tasks arise from distinct mechanisms shaped by learning curricula

Zeraati, R., Khajehabdollahi, S., Giannakakis, E., Schafer, T., Martius, G., & Levina, A. (2024). Long timescales needed for memory tasks arise from distinct mechanisms shaped by learning curricula. In Computational and Systems Neuroscience Meeting (COSYNE 2024) (pp. 47-48).

Basic
Genre: Meeting Abstract


Creators

 Creators:
Zeraati, R.¹, Author
Khajehabdollahi, S., Author
Giannakakis, E., Author
Schafer, T.¹, Author
Martius, G., Author
Levina, A., Author
Affiliations:
¹ Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3505519

Content

Free keywords: -
Abstract: The brain solves complex tasks with intricate temporal dependencies by maintaining the memory of previous inputs over long periods. Long timescales required for solving such tasks may arise from the biophysical properties of individual neurons (single-neuron timescale, e.g., membrane time constant) or recurrent interactions among them. While both mechanisms operate in brain networks, their interplay and individual contributions to optimally solving memory-dependent tasks remain poorly understood. We investigate the role of different mechanisms by training recurrent neural networks (RNNs) to solve N-parity and N-delayed match-to-sample (N-DMS) tasks with increasing memory requirements controlled by N. Networks are trained using two distinct curricula with gradually increasing N: (i) in single-N curriculum, networks learn a new N at each curriculum step; (ii) in multi-N curriculum, they learn a new N while maintaining the solutions for previous Ns, similar to biological learning. Each neuron has a leak parameter indicating the single-neuron timescale, optimized alongside recurrent weights. We estimate the network-mediated timescales from the autocorrelation decay of each neuron’s activity. We find that in both curricula, RNNs develop longer timescales with increasing N, but via distinct mechanisms. Single-N RNNs operate in a strong inhibitory state and mainly rely on increasing their single-neuron timescales with N. However, multi-N RNNs operate closer to a balanced state and use only recurrent connectivity to develop long timescales, while keeping their single-neuron timescales constant. The latter is compatible with findings in primate cortex. We show that using network-mediated mechanisms to develop long timescales, as in multi-N RNNs, increases training speed and stability to perturbations, and allows generalization to tasks beyond the training set. Our results suggest that adapting timescales to task requirements via recurrent connectivity enables learning more complex objectives (holding multiple concurrent memories) and improves computational robustness, which can be a beneficial strategy for implementing brain computations.
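The abstract rests on two concrete ingredients that can be sketched in code: a leaky RNN unit whose leak parameter sets the single-neuron timescale, and a network-mediated timescale estimated by fitting an exponential decay to the autocorrelation of each neuron's activity. The sketch below is a minimal illustration of those two ideas, not the authors' implementation; the function and parameter names (leaky_rnn_step, estimate_timescale, leak) are hypothetical.

```python
import numpy as np

def leaky_rnn_step(h, x, W_rec, W_in, leak):
    """One update of a leaky RNN unit: the leak parameter (0 < leak <= 1)
    sets the single-neuron timescale (smaller leak -> slower neuron)."""
    return (1.0 - leak) * h + leak * np.tanh(W_rec @ h + W_in @ x)

def estimate_timescale(activity, max_lag=50):
    """Estimate a neuron's timescale tau from the decay of its
    autocorrelation, assuming AC(lag) ~ exp(-lag / tau)."""
    a = activity - activity.mean()
    ac = np.correlate(a, a, mode="full")[len(a) - 1:]
    ac /= ac[0]                            # normalize so AC(0) = 1
    lags = np.arange(1, max_lag)
    pos = ac[1:max_lag] > 0                # fit only positive values on a log scale
    slope, _ = np.polyfit(lags[pos], np.log(ac[1:max_lag][pos]), 1)
    return -1.0 / slope                    # tau in units of time steps

# Toy demo: a random leaky network driven by noise.
rng = np.random.default_rng(0)
n, T = 50, 2000
W_rec = rng.normal(0, 1.0 / np.sqrt(n), (n, n))
W_in = rng.normal(0, 1.0, (n, 1))
leak = 0.2                                 # shared single-neuron timescale in this toy example
h = np.zeros(n)
trace = np.zeros((T, n))
for t in range(T):
    h = leaky_rnn_step(h, rng.normal(size=1), W_rec, W_in, leak)
    trace[t] = h

print("estimated timescale of neuron 0:",
      round(estimate_timescale(trace[:, 0]), 2), "steps")
```

In the abstract's framing, a single-N curriculum would lengthen timescales mainly by adjusting the leak (single-neuron) parameter, whereas a multi-N curriculum would lengthen the network-mediated timescale through the recurrent weights W_rec while leaving the leak roughly constant.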

Details

Language(s):
 Dates: 2024-03
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: -
 Degree: -

Event

Title: Computational and Systems Neuroscience Meeting (COSYNE 2024)
Place of Event: Lisboa, Portugal
Start-/End Date: 2024-02-29 - 2024-03-05

Source 1

Title: Computational and Systems Neuroscience Meeting (COSYNE 2024)
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: T-35
Start / End Page: 47 - 48
Identifier: -