Conference Paper

Flexible statistical inference for mechanistic models of neural dynamics

MPS-Authors
Lueckmann, J.-M.; Gonçalves, Pedro J.; Bassetto, Giacomo; Oecal, Kaan; Nonnenmacher, Marcel; Macke, Jakob H.
All authors: Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Max Planck Society

Fulltext (public)

Lückmann_Flexible.pdf (Any fulltext), 7MB

Citation

Lueckmann, J.-M., Gonçalves, P. J., Bassetto, G., Oecal, K., Nonnenmacher, M., & Macke, J. H. (2017). Flexible statistical inference for mechanistic models of neural dynamics. In Advances in Neural Information Processing Systems 30 (NIPS 2017).


Cite as: https://hdl.handle.net/21.11116/0000-0000-2569-1
Abstract
Mechanistic models of single-neuron dynamics have been extensively studied in computational neuroscience. However, identifying which models can quantitatively reproduce empirically measured data has been challenging. We propose to overcome this limitation by using likelihood-free inference approaches (also known as Approximate Bayesian Computation, ABC) to perform full Bayesian inference on single-neuron models. Our approach builds on recent advances in ABC by learning a neural network which maps features of the observed data to the posterior distribution over parameters. We learn a Bayesian mixture-density network approximating the posterior over multiple rounds of adaptively chosen simulations. Furthermore, we propose an efficient approach for handling missing features and parameter settings for which the simulator fails, as well as a strategy for automatically learning relevant features using recurrent neural networks. On synthetic data, our approach efficiently estimates posterior distributions and recovers ground-truth parameters. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, which yield model-predicted voltage traces that accurately match empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical approaches to single-neuron modelling.
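The core loop described in the abstract — draw parameters from a prior, run the mechanistic simulator, and train a conditional density estimator that maps summary features of the simulated data to a posterior over parameters — can be illustrated on a toy conjugate model. The sketch below is illustrative only, not the paper's method: it runs a single round of simulations from the prior and replaces the Bayesian mixture-density network with a one-component conditional Gaussian fit by least squares, chosen so the result can be checked against the toy model's analytic posterior. All names in it are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "mechanistic simulator": theta ~ N(0, 1) prior; the summary
# feature is the mean of N_DRAWS observations from N(theta, 1).
# This conjugate setup has a known posterior to check against.
N_DRAWS = 10

def simulate(theta):
    return rng.normal(theta, 1.0, size=N_DRAWS).mean()

def fit_conditional_gaussian(thetas, xs):
    """Fit theta ~ a*x + b with homoscedastic residual variance.
    A drastic stand-in for the paper's mixture-density network."""
    A = np.stack([xs, np.ones_like(xs)], axis=1)
    coef, *_ = np.linalg.lstsq(A, thetas, rcond=None)
    resid = thetas - A @ coef
    return coef, resid.var()

# One round: simulate from the prior, then fit the density estimator.
thetas = rng.normal(0.0, 1.0, size=2000)
xs = np.array([simulate(t) for t in thetas])
(a, b), s2 = fit_conditional_gaussian(thetas, xs)

# Evaluate the learned posterior at the "observed" summary feature.
x_obs = 1.5
post_mean_hat = a * x_obs + b
post_std_hat = np.sqrt(s2)
print(post_mean_hat, post_std_hat)
```

For this model the exact posterior is N(x_obs · n/(n+1), 1/(n+1)) with n = N_DRAWS, so the fit is directly verifiable. The paper goes well beyond this sketch: it iterates multiple rounds with adaptively chosen (proposal-based) simulations, corrects for the proposal prior, handles missing features and failed simulations, and can learn the summary features themselves with recurrent networks.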