Meeting Abstract

Flexible Bayesian inference for complex models of single neurons

MPS-Authors

Goncalves,  P
Center of Advanced European Studies and Research (caesar), Max Planck Society;

Lueckmann,  J-M
Center of Advanced European Studies and Research (caesar), Max Planck Society;

Bassetto,  G
Center of Advanced European Studies and Research (caesar), Max Planck Society;

Nonnenmacher,  M
Center of Advanced European Studies and Research (caesar), Max Planck Society;

Macke,  J
Former Research Group Neural Computation and Behaviour, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Center of Advanced European Studies and Research (caesar), Max Planck Society;

Citation

Goncalves, P., Lueckmann, J.-M., Bassetto, G., Nonnenmacher, M., & Macke, J. (2017). Flexible Bayesian inference for complex models of single neurons. BMC Neuroscience, 18(Supplement 1): O4, 58.


Cite as: https://hdl.handle.net/21.11116/0000-0000-C56C-9
Abstract
Characterizing the input-output transformations of single neurons is critical for understanding neural computation. Single-neuron models have been studied extensively, ranging from simple phenomenological models to complex multi-compartment neurons. However, linking mechanistic models of single neurons to empirical observations of neural activity has been challenging. Statistical inference is possible for only a few neuron models (e.g. GLMs), and no generally applicable, effective statistical inference algorithms are available. As a consequence, comparisons between models and data are either qualitative or rely on manual parameter tweaking, parameter fitting with heuristics, or brute-force search [1]. Furthermore, parameter-fitting approaches typically return a single best-fitting estimate but do not characterize the entire space of models that would be consistent with the data (the posterior distribution). We overcome this limitation by presenting a general method to infer the posterior distribution over model parameters given observed data on complex single-neuron models. Our approach can be applied in a 'black box' manner to a wide range of single-neuron models without requiring model-specific modifications. In particular, it extends to models without explicit likelihoods (e.g. most single-neuron models). We achieve this goal by building on recent advances in likelihood-free Bayesian inference [2]: the key idea is to simulate multiple datasets from different parameters, and then to train a probabilistic neural network which approximates the mapping from data to the posterior distribution. We illustrate this approach using single- and multi-compartment models of single neurons. On simulated data, the estimated posterior distributions recover ground-truth parameters and reveal the manifold of parameters for which the model exhibits the same behaviour. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, and the corresponding voltage traces accurately match the empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between biophysical and statistical approaches to single-neuron modelling.
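
The core recipe described in the abstract (draw parameters from a prior, simulate data from each parameter set, and train a conditional density network on the resulting parameter/data pairs) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: it assumes a toy leaky-integrator simulator with hand-picked summary statistics and approximates the posterior with a single diagonal Gaussian rather than the richer probabilistic network referred to above; all names (simulate, PosteriorNet, the prior bounds) are hypothetical.

# Minimal sketch of likelihood-free posterior estimation in the spirit of [2]:
# sample parameters from a prior, simulate data, and train a conditional
# density network mapping summary statistics to a posterior over parameters.
# The simulator below is a toy placeholder, NOT the single-neuron models used in the abstract.

import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def simulate(theta, n_steps=200, dt=1.0):
    """Toy 'neuron' simulator (placeholder): leaky integration of noisy input.
    theta = (leak, input_gain). Returns summary statistics of the trace."""
    leak, gain = theta
    v = np.zeros(n_steps)
    for t in range(1, n_steps):
        v[t] = v[t - 1] + dt * (-leak * v[t - 1] + gain * rng.normal())
    return np.array([v.mean(), v.std(), np.abs(np.diff(v)).mean()])

def sample_prior(n):
    # Uniform box prior over (leak, input_gain); an assumption for this sketch.
    return rng.uniform(low=[0.01, 0.1], high=[0.5, 2.0], size=(n, 2))

# 1. Simulate a training set of (theta, x) pairs.
n_train = 2000
thetas = sample_prior(n_train)
xs = np.stack([simulate(th) for th in thetas])

# 2. Conditional density network: predicts a diagonal Gaussian over theta given x.
#    (A mixture-density-style network would be closer to the probabilistic
#     networks referred to in the abstract; a single Gaussian keeps this short.)
class PosteriorNet(nn.Module):
    def __init__(self, x_dim=3, theta_dim=2, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, theta_dim)
        self.log_std = nn.Linear(hidden, theta_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_std(h)

net = PosteriorNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x_t = torch.tensor(xs, dtype=torch.float32)
th_t = torch.tensor(thetas, dtype=torch.float32)

# 3. Train by maximizing the log-probability of the true parameters under q(theta | x).
for epoch in range(200):
    mean, log_std = net(x_t)
    q = torch.distributions.Normal(mean, log_std.exp())
    loss = -q.log_prob(th_t).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# 4. Condition on an "observed" trace to obtain its approximate posterior.
theta_true = np.array([0.1, 1.0])
x_obs = torch.tensor(simulate(theta_true), dtype=torch.float32)
with torch.no_grad():
    post_mean, post_log_std = net(x_obs)
print("true theta:     ", theta_true)
print("posterior mean: ", post_mean.numpy())
print("posterior std:  ", post_log_std.exp().numpy())

In this sketch, the trained network is amortized: conditioning it on a new observed summary vector returns an approximate posterior without running any further simulations or model-specific derivations.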