
Record

Released

Poster

Amortized inference for fast spike prediction from calcium imaging data

MPG Authors

Speiser,  A
Center of Advanced European Studies and Research (caesar), Max Planck Society;


Macke,  J
Center of Advanced European Studies and Research (caesar), Max Planck Society;

External Resources

Link
(any fulltext)

Fulltexts (freely accessible)
No freely accessible fulltexts are available
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Speiser, A., Turaga, S., Archer, E., & Macke, J. (2017). Amortized inference for fast spike prediction from calcium imaging data. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2017), Salt Lake City, UT, USA.


Citation link: http://hdl.handle.net/21.11116/0000-0000-C501-0
Abstract
Calcium imaging allows neuronal activity measurements from large populations of spatially identified neurons in vivo. However, spike inference algorithms are needed to infer spike times from fluorescence measurements of calcium concentration. Bayesian model inversion can be used to infer spikes, using carefully designed generative models that describe how spiking activity in a neuron influences measured fluorescence. Model inversion typically requires either computationally expensive MCMC sampling methods, or faster but approximate maximum a posteriori estimation. We present a method for efficiently inverting generative models for spike inference. Our method is several orders of magnitude faster than existing approaches, allowing for generative-model-based spike inference in real time for large-scale population neural imaging, and can be applied to a wide range of linear and nonlinear generative models. We use recent advances in black-box variational inference (BBVI, Ranganath 2014) and 'amortize' inference by learning a deep-network-based recognition model for fast model inversion (Mnih 2016). At training time, we simultaneously optimize the parameters of the generative model as well as the weights of a deep neural network which predicts the posterior approximation. At test time, performing inference for a given trace amounts to a fast single forward pass through the network at constant computational cost, and without the need for iterative optimization or MCMC sampling. On simple synthetic datasets, we show that our method is just as accurate as existing methods. However, the BBVI approach works with a wide range of generative models in a black-box manner as long as they are differentiable. In particular, we show that a nonlinear generative model is better suited to describe GCaMP6 data (Chen 2013), leading to improved performance on real data. The framework can also easily be extended to combine supervised and unsupervised objectives, enabling semi-supervised learning of spike inference.
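The amortized-inference idea summarized in the abstract can be illustrated with a minimal sketch. This is a toy illustration, not the authors' actual model: the AR(1) calcium dynamics, the window size, and the one-hidden-layer recognition network below are all assumptions chosen here for demonstration. It shows the two ingredients the abstract pairs together: a differentiable generative model mapping spikes to fluorescence, and a recognition network whose single forward pass turns a fluorescence trace into per-timestep spike probabilities at constant cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_fluorescence(spikes, gamma=0.95, alpha=1.0, sigma=0.1):
    """Toy linear generative model: AR(1) calcium dynamics driven by
    spikes, observed with additive Gaussian noise. A stand-in for the
    (possibly nonlinear) generative models discussed in the abstract."""
    T = len(spikes)
    c = np.zeros(T)
    for t in range(1, T):
        c[t] = gamma * c[t - 1] + alpha * spikes[t]
    return c + sigma * rng.standard_normal(T)

def init_recognition_net(window=11, hidden=16):
    """Tiny 1-D convolutional recognition network (hypothetical
    architecture): maps a local window of fluorescence around each
    timestep to a spike probability for that timestep."""
    return {
        "W1": 0.1 * rng.standard_normal((hidden, window)),
        "b1": np.zeros(hidden),
        "w2": 0.1 * rng.standard_normal(hidden),
        "b2": 0.0,
    }

def recognition_forward(params, trace):
    """Single forward pass: constant cost per timestep, no iterative
    optimization or MCMC sampling at test time."""
    window = params["W1"].shape[1]
    pad = window // 2
    padded = np.pad(trace, pad)
    # Sliding windows over the trace: shape (T, window)
    X = np.lib.stride_tricks.sliding_window_view(padded, window)
    H = np.tanh(X @ params["W1"].T + params["b1"])  # (T, hidden)
    logits = H @ params["w2"] + params["b2"]        # (T,)
    return 1.0 / (1.0 + np.exp(-logits))            # spike probabilities

# Simulate a trace and run amortized inference in one forward pass.
spikes = (rng.random(200) < 0.05).astype(float)
trace = generate_fluorescence(spikes)
p = recognition_forward(init_recognition_net(), trace)
```

In the full method, the generative-model parameters and the recognition-network weights would be trained jointly by maximizing a variational objective (ELBO) with black-box gradient estimators; the sketch above only shows the test-time forward pass that makes inference fast.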