
Released

Poster

The Expressive Leaky Memory (ELM) neuron: a biologically inspired, computationally expressive, and efficient model of a cortical neuron

MPS-Authors

Levina, A.
Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Spieler, A., Rahaman, N., Martius, G., Schölkopf, B., & Levina, A. (2023). The Expressive Leaky Memory (ELM) neuron: a biologically inspired, computationally expressive, and efficient model of a cortical neuron. Poster presented at Bernstein Conference 2023, Berlin, Germany.


Cite as: https://hdl.handle.net/21.11116/0000-000D-D73B-F
Abstract
Traditional large-scale neuroscience models use greatly simplified models of individual neurons to capture the dynamics of neuronal populations, generating complex dynamics primarily through recurrent connections. Similarly, highly successful deep learning models rely on massive numbers of very simple individual neurons. However, each biological cortical neuron is inherently a sophisticated computational device whose dynamics are shaped in a non-trivial manner by many biophysical processes acting over a broad range of timescales. A recent attempt to capture a single cortical neuron's input-output relationship with a convolutional neural network resulted in a model with millions of parameters [2]. We questioned whether so many parameters are necessary and hypothesized that a recurrent-cell architecture could improve on this. Consequently, we developed the Expressive Leaky Memory (ELM) neuron: a biologically inspired, computationally expressive, yet efficient recurrent model of a cortical neuron [1]. Remarkably, a version of our ELM neuron requires merely thousands of trainable parameters (instead of millions) to accurately match the aforementioned input-output relationship. However, this requires multiple memory-like hidden states (instead of a single one) and highly nonlinear synaptic integration (instead of simple summation). In subsequent investigations, we quantify the impact of the individual model components on performance and show that coarser-grained processing of synaptic input, in analogy to neuronal branches, is crucial for increasing computational efficiency. Having developed a simple yet expressive neuronal model architecture, we examined how such neurons perform on various tasks. We evaluated our model on a task requiring the addition of spike-encoded digits, derived from the Spiking Heidelberg Digits dataset, and found that a single ELM neuron can solve this challenging task given sufficiently long and diverse memory timescales [3]. Even more surprisingly, the ELM neuron can outperform many transformer-based models on Pathfinder-X, a benchmark commonly used to assess state-of-the-art models for long-range dependency prediction [4]. As a next step, it would be interesting to investigate whether neural networks could benefit from a more powerful single-neuron model.
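
Illustrative sketch (not part of the original record): the abstract gives no equations or code, but the components it names, several leaky memory-like hidden states with diverse timescales and a nonlinear synaptic-integration step, can be sketched as a small recurrent cell. The Python below is a minimal, hypothetical illustration under those assumptions; the class and parameter names (ELMLikeCell, n_mem, tau) are invented for this sketch and are not taken from the poster or paper.

import numpy as np

# Hypothetical sketch of an ELM-style recurrent cell, based only on the
# components named in the abstract: multiple leaky memory states with
# diverse (here log-spaced) timescales, and nonlinear synaptic integration
# via a small MLP instead of a simple weighted sum.
class ELMLikeCell:
    def __init__(self, n_syn, n_mem, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.tau = np.logspace(0, 3, n_mem)               # decay timescales, e.g. 1..1000 steps
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_syn + n_mem))
        self.W2 = rng.normal(0.0, 0.1, (n_mem, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (1, n_mem))

    def step(self, m, x):
        # Nonlinear integration of synaptic input x and current memory m.
        h = np.tanh(self.W1 @ np.concatenate([x, m]))
        # Leaky update: each memory state decays with its own timescale.
        m = m * np.exp(-1.0 / self.tau) + self.W2 @ h
        return m, self.W_out @ m                           # new memory, scalar readout

# Usage: drive the cell with a short random spike train.
cell = ELMLikeCell(n_syn=8, n_mem=4, n_hidden=16)
m = np.zeros(4)
spikes = (np.random.default_rng(1).random((50, 8)) < 0.1).astype(float)
for x in spikes:
    m, y = cell.step(m, x)

In the actual model the parameters would be trained (e.g. by gradient descent) to reproduce a detailed neuron's input-output mapping; this sketch only illustrates the state-update structure the abstract describes.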