  Bayesian Neural System identification: error bars, receptive fields and neural couplings

Gerwinn, S., Seeger, M., Zeck, G., & Bethge, M. (2006). Bayesian Neural System identification: error bars, receptive fields and neural couplings. In 7th Conference of Tuebingen Junior Neuroscientists (NeNa 2006) (pp. 9).

Creators

 Creators:
Gerwinn, S. (1, 2), Author
Seeger, M. (1, 2), Author
Zeck, G., Author
Bethge, M. (1, 2), Author
Affiliations:
1: Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: The task of system identification lies at the heart of neural data analysis. Bayesian system identification methods provide a powerful toolbox which allows one to make inferences about stimulus-neuron and neuron-neuron dependencies in a principled way. Rather than reporting only the most likely parameters, the posterior distribution obtained in the Bayesian approach informs us about the range of parameter values that are consistent with the observed data and the assumptions made. In other words, Bayesian receptive fields always come with error bars. Since the amount of data from neural recordings is limited, the error bars are as important as the receptive field itself.
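As an illustration of how such error bars arise, here is a minimal sketch under a simplified linear-Gaussian assumption (not the Poisson model treated in this work): with a Gaussian prior on the receptive field and Gaussian response noise, the posterior covariance directly yields a per-pixel uncertainty. The toy data and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_samples = 16, 200
true_rf = rng.normal(size=n_pixels)                        # ground-truth receptive field
S = rng.normal(size=(n_samples, n_pixels))                 # white-noise stimulus
r = S @ true_rf + rng.normal(scale=1.0, size=n_samples)    # noisy responses

# Prior w ~ N(0, alpha^-1 I) and likelihood precision beta give a conjugate Gaussian posterior
alpha, beta = 1.0, 1.0
post_cov = np.linalg.inv(alpha * np.eye(n_pixels) + beta * S.T @ S)
post_mean = beta * post_cov @ S.T @ r                      # posterior-mean receptive field
error_bars = np.sqrt(np.diag(post_cov))                    # per-pixel posterior std. dev.

print(np.c_[post_mean, error_bars])                        # each estimate next to its error bar

With few samples the error bars stay wide; only as n_samples grows do they shrink, which is the point made above about limited neural data.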
Here we apply a recently developed approximation of Bayesian inference to a multi-cell response model consisting of a set of coupled units, each of which is a Linear-Nonlinear-Poisson (LNP) cascade neuron model. The instantaneous firing rate of each unit depends multiplicatively on both the spike train history of the units and the stimulus. Parameter fitting in this model has been shown to be a convex optimization problem (Paninski 2004) that can be solved efficiently, scaling linearly in the number of events, neurons and history size. By doing inference in such a model one can estimate excitatory and inhibitory interactions between the neurons as well as the dependence on the stimulus. In addition, the Bayesian framework allows one not only to put error bars on the inferred parameter values but also to quantify the predictive power of the model in terms of the marginal likelihood.
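A rough sketch of the kind of coupled model described above, in discretized time with an exponential nonlinearity so that the rate factors multiplicatively into a stimulus-driven term and spike-history terms. The parameter layout, binning and helper name are assumptions made for illustration; only the (convex) negative log-likelihood is shown, not the approximate Bayesian inference itself.

import numpy as np

def coupled_lnp_negloglik(params, stimulus, spikes, history_len, dt=1.0):
    """Negative log-likelihood of a coupled LNP / Poisson-GLM model.

    params   : flat vector [stimulus filters | coupling filters | biases]
    stimulus : (T, d) stimulus matrix
    spikes   : (T, n) binned spike counts of the n coupled units
    """
    T, d = stimulus.shape
    _, n = spikes.shape
    k = params[:n * d].reshape(n, d)                           # stimulus filters
    h = params[n * d:n * d + n * n * history_len].reshape(n, n, history_len)
    b = params[n * d + n * n * history_len:]                   # per-unit biases

    nll = 0.0
    for t in range(T):
        drive = k @ stimulus[t] + b                            # stimulus contribution
        for lag in range(1, history_len + 1):                  # spike-history / coupling terms
            if t - lag >= 0:
                drive += h[:, :, lag - 1] @ spikes[t - lag]
        rate = np.exp(drive) * dt                              # multiplicative LNP rate
        nll += np.sum(rate - spikes[t] * np.log(rate + 1e-12))
    return nll

Because the log-rate is linear in the parameters, this objective is convex (Paninski 2004) and could in principle be handed to a generic optimizer such as scipy.optimize.minimize; the Bayesian treatment sketched in the abstract additionally places a prior on the parameters and approximates the full posterior rather than returning only the optimum.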
As a sanity check of the new technique, and also to explore its limitations, we first verify on artificially generated data that we are able to infer the true underlying model. Then we apply the method to recordings from retinal ganglion cells (RGC) responding to white noise (m-sequence) stimulation. The figure shows both the inferred receptive fields (lower) and the confidence range of the sorted pixel values (upper) when using different fractions of the data (0, 10, 50, and 100%). We also compare the results with the receptive fields derived from classical linear correlation analysis and from maximum likelihood estimation.

Details

Language(s):
 Dates: 2006-11
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: BibTex Citekey: GerwinnSZB2006
 Degree: -

Event

Title: 7th Conference of Tuebingen Junior Neuroscientists (NeNa 2006)
Place of Event: Oberjoch, Germany
Start-/End Date: 2006-11-26 - 2006-11-28


Source 1

Title: 7th Conference of Tuebingen Junior Neuroscientists (NeNa 2006)
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 9
Identifier: -