  The Expressive Leaky Memory (ELM) neuron: a biologically inspired, computationally expressive, and efficient model of a cortical neuron

Spieler, A., Rahaman, N., Martius, G., Schölkopf, B., & Levina, A. (2023). The Expressive Leaky Memory (ELM) neuron: a biologically inspired, computationally expressive, and efficient model of a cortical neuron. Poster presented at Bernstein Conference 2023, Berlin, Germany.

Creators

Spieler, A., Author
Rahaman, N., Author
Martius, G., Author
Schölkopf, B., Author
Levina, A.1, Author
Affiliations:
1 Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3505519

Content

Free keywords: -
Abstract: Traditional large-scale neuroscience models use greatly simplified models of individual neurons to capture the dynamics of neuronal populations, generating complex dynamics primarily through recurrent connections. Similarly, hugely successful deep learning models rely on massive numbers of highly simplified individual neurons. However, each biological cortical neuron is inherently a sophisticated computational device whose dynamics are shaped in a non-trivial manner by many biophysical processes spanning a broad range of timescales. A recent attempt to capture a single cortical neuron's input-output relationship with a convolutional neural network resulted in a model with millions of parameters [2]. We questioned whether so many parameters are necessary and hypothesized that a recurrent-cell architecture could improve on this. Consequently, we developed the Expressive Leaky Memory (ELM) neuron: a biologically inspired, computationally expressive, yet efficient recurrent model of a cortical neuron [1]. Remarkably, a version of our ELM neuron requires merely thousands of trainable parameters (instead of millions) to match the aforementioned input-output relationship accurately. However, this requires multiple memory-like hidden states (instead of a single one) and highly nonlinear synaptic integration (instead of simple summation). In subsequent investigations, we quantify the impact of the individual model components on performance and show how coarser-grained processing of synaptic input, in analogy to neuronal branches, is crucial for increasing computational efficiency. Having developed a simple yet expressive neuronal model architecture, we then examined how such neurons could solve various tasks. We evaluated our model on a task requiring the addition of spike-encoded digits, derived from the Spiking Heidelberg Digits dataset, and found that a single ELM neuron can solve this complicated task given sufficiently long and diverse memory timescales [3]. Even more surprisingly, the ELM neuron can outperform many transformer-based models on Pathfinder-X, a task commonly used to benchmark state-of-the-art models for long-range dependency prediction [4]. As a next step, it would be interesting to investigate whether neural networks could benefit from a more powerful single-neuron model.
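The abstract describes the ELM neuron as a recurrent cell with multiple leaky memory states spanning diverse timescales, MLP-based nonlinear synaptic integration, and branch-like coarse-graining of synaptic input. Below is a minimal PyTorch sketch of such an architecture; the class name, layer sizes, timescale parameterization, and exact update rule are illustrative assumptions and are not the published model's equations [1].

import math
import torch
import torch.nn as nn

class ELMNeuronSketch(nn.Module):
    def __init__(self, n_synapses=100, n_branches=10, n_memory=20,
                 hidden=64, tau_min=1.0, tau_max=1000.0, dt=1.0):
        super().__init__()
        # Coarse-grained, branch-like grouping of synaptic input (learned weights).
        self.branch = nn.Linear(n_synapses, n_branches)
        # Nonlinear synaptic integration: an MLP mixing branch input and memory.
        self.mlp = nn.Sequential(
            nn.Linear(n_branches + n_memory, hidden),
            nn.Tanh(),
            nn.Linear(hidden, n_memory),
        )
        # Fixed, log-spaced memory timescales -> per-unit leaky decay factors.
        taus = torch.logspace(math.log10(tau_min), math.log10(tau_max), n_memory)
        self.register_buffer("decay", torch.exp(-dt / taus))
        self.readout = nn.Linear(n_memory, 1)

    def forward(self, x):
        # x: (time, batch, n_synapses) input trace, e.g. filtered spike trains.
        m = x.new_zeros(x.shape[1], self.decay.shape[0])
        outputs = []
        for x_t in x:
            drive = self.mlp(torch.cat([self.branch(x_t), m], dim=-1))
            # Leaky memory update: each unit decays toward the nonlinear drive.
            m = self.decay * m + (1.0 - self.decay) * drive
            outputs.append(self.readout(m))
        return torch.stack(outputs)

In this sketch the readout is a single output per time step; the published model's readout, gating, and timescale handling may differ.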

Details

Language(s):
Dates: 2023-09
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: -
Degree: -

Event

Title: Bernstein Conference 2023
Place of Event: Berlin, Germany
Start-/End Date: 2023-09-26 - 2023-09-29

Source 1

Title: Bernstein Conference 2023
Source Genre: Proceedings
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: IV 107
Start / End Page: -
Identifier: -