
Released

Preprint

SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks

MPS-Authors

Engelken,  Rainer
Department of Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

Preprint
(Preprint), 728KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Engelken, R. (2023). SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks. arXiv. doi:10.48550/arXiv.2312.17216.


Cite as: https://hdl.handle.net/21.11116/0000-000F-675C-7
Abstract
Spiking Neural Networks (SNNs) are biologically inspired models that are capable of processing information in streams of action potentials. However, simulating and training SNNs is computationally expensive due to the need to solve large systems of coupled differential equations. In this paper, we introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs. Our algorithm reduces the computational cost of both the forward-pass and backward-pass operations from O(N) to O(log(N)) per network spike, thereby enabling numerically exact simulations of large spiking networks and their efficient training using backpropagation through time. By leveraging the sparsity of the network, SparseProp eliminates the need to iterate through all neurons at each spike, employing efficient state updates instead. We demonstrate the efficacy of SparseProp across several classical integrate-and-fire neuron models, including a simulation of a sparse SNN with one million LIF neurons. This results in a speed-up exceeding four orders of magnitude relative to previous event-based implementations. Our work provides an efficient and exact solution for training large-scale spiking neural networks and opens up new possibilities for building more sophisticated brain-inspired models.
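The abstract's central idea — replacing an O(N) sweep over all neurons at each spike with an O(log N) update — can be illustrated with a generic event-driven scheme: because the LIF membrane equation has a closed-form solution between spikes, each neuron's next threshold crossing can be computed analytically and stored in a binary heap, so that each network spike only touches the spiking neuron and its K sparse postsynaptic targets. The sketch below is a simplified illustration of this event-based principle, not the paper's actual SparseProp implementation (which uses a more refined change of variables); all parameter names and values here are illustrative assumptions.

```python
import heapq
import math
import random

def simulate_lif_event_based(N=1000, K=10, J=-0.1, I_ext=1.2,
                             n_spikes=5000, seed=0):
    """Event-driven simulation of a sparse inhibitory LIF network.

    Each neuron obeys dV/dt = I_ext - V (tau = 1, threshold 1, reset 0),
    so between spikes V(t) = I_ext + (V0 - I_ext) * exp(-t), and the next
    threshold crossing has a closed form. A binary heap keyed by spike
    time gives O(K log N) work per network spike instead of O(N).
    """
    rng = random.Random(seed)
    # Sparse connectivity: each neuron projects to K random targets.
    targets = [rng.sample(range(N), K) for _ in range(N)]

    V = [rng.uniform(0.0, 0.9) for _ in range(N)]
    t_last = [0.0] * N    # time of each neuron's last state update
    version = [0] * N     # lazy-deletion counter for stale heap entries

    def next_spike_time(i, now):
        # Solve I_ext + (V_i - I_ext) * exp(-dt) = 1 for dt (threshold 1).
        if I_ext <= 1.0:
            return math.inf  # never reaches threshold without extra input
        return now + math.log((I_ext - V[i]) / (I_ext - 1.0))

    heap = [(next_spike_time(i, 0.0), version[i], i) for i in range(N)]
    heapq.heapify(heap)

    spikes = []
    while len(spikes) < n_spikes and heap:
        t, ver, i = heapq.heappop(heap)
        if ver != version[i] or t == math.inf:
            continue  # stale entry (neuron was updated since this push)
        # Neuron i spikes: record it, reset it, reschedule it.
        spikes.append((t, i))
        V[i] = 0.0
        t_last[i] = t
        version[i] += 1
        heapq.heappush(heap, (next_spike_time(i, t), version[i], i))
        # Only the K postsynaptic targets need updating, not all N neurons.
        for j in targets[i]:
            # Advance j's membrane potential analytically to time t.
            V[j] = I_ext + (V[j] - I_ext) * math.exp(-(t - t_last[j]))
            t_last[j] = t
            V[j] += J  # apply the (inhibitory) synaptic kick
            version[j] += 1
            heapq.heappush(heap, (next_spike_time(j, t), version[j], j))
    return spikes

spikes = simulate_lif_event_based()
print(len(spikes))
```

With lazy deletion, each spike costs one pop plus K + 1 pushes into a heap of size proportional to N, i.e. O(K log N) per network spike — the scaling the abstract describes, versus O(N) for a naive loop over all neurons at every event.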