
Released

Poster

Robust estimation for neural state-space models

MPS-Authors

Macke, J
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Buesing, L., Macke, J., & Sahani, M. (2013). Robust estimation for neural state-space models. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2013), Salt Lake City, UT, USA.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-B4E4-6
Abstract
Neurons within cortical populations are tightly coupled into collective dynamical systems that code and compute cooperatively. This dynamical coupling leads to firing variability which is structured across both neurons and time, and which can be described by statistical models where a latent low-dimensional (‘state-space’) stochastic process represents the common effect of network activity on the recorded cells. However, such state-space models can be challenging to fit to experimental data. The discrete nature of spiking leads to non-linear and non-Gaussian models, necessitating approximations during model estimation that may be computationally intensive or error-prone. Furthermore, the likelihood function (the quality of fit as a function of model parameters) may have multiple maxima, making it difficult to find the overall best model amongst many locally-optimal ones. We present an algorithm which improves the efficiency and robustness of estimation for statistical models in which a latent stochastic linear dynamical system (LDS) drives generalised-linear representations of individual cells. Our algorithm is based on an engineering approach called subspace identification (SSID). SSID was developed to estimate LDS models of Gaussian variables and works by identifying low-dimensional structure in the matrix of covariances between anisochronic measurements. It yields a unique and statistically consistent estimate at relatively little cost. We have extended SSID to the generalised-linear setting. The extended SSID learns a good model of neural population activity. On large simulated data sets with Poisson spike-counts, the algorithm recovers the correct parameters rapidly, without iteration or approximation. On multi-electrode cortical recordings it provides an effective initialisation for conventional maximum-likelihood estimation, avoiding poor local optima and substantially speeding convergence. Thus the new approach promises to render state-space methods with non-Gaussian observations far more practicable.
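The core idea of SSID, as summarised above, is that the matrix of time-lagged covariances of the observations is low-rank and factors into an observability part (determined by the observation matrix C and dynamics matrix A) and a controllability part. The sketch below illustrates this Ho-Kalman-style construction for the plain Gaussian-observation case only; it does not reproduce the authors' generalised-linear (Poisson) moment-conversion step, and all function names, parameters, and the simulated data are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of subspace identification (SSID) for a latent linear
# dynamical system: low-dimensional structure in time-lagged covariances
# reveals the latent dynamics. Gaussian-observation case only; the
# Poisson / generalised-linear extension described in the abstract is
# not reproduced here.

import numpy as np

def ssid_estimate(y, n_latent, hankel_lags=5):
    """Estimate dynamics matrix A and observation matrix C from data y
    (time x neurons) via an SVD of a block-Hankel matrix of time-lagged
    covariances."""
    T, q = y.shape
    y = y - y.mean(axis=0)  # work with centred observations

    # Time-lagged covariance Cov[y_{t+s}, y_t]
    def lagged_cov(s):
        return (y[s:].T @ y[:T - s]) / (T - s)

    # Block-Hankel matrix with blocks H[i, j] = Cov[y_{t+i+j+1}, y_t]
    H = np.block([[lagged_cov(i + j + 1) for j in range(hankel_lags)]
                  for i in range(hankel_lags)])

    # Low-rank factorisation: H ~ (extended observability) x (controllability)
    U, s, Vt = np.linalg.svd(H)
    Obs = U[:, :n_latent] * np.sqrt(s[:n_latent])  # extended observability matrix

    # C is the first block row; A satisfies the shift relation between
    # consecutive block rows of the observability matrix
    C_hat = Obs[:q, :]
    A_hat = np.linalg.pinv(Obs[:-q, :]) @ Obs[q:, :]
    return A_hat, C_hat

# Illustrative usage on simulated Gaussian data (purely a toy example)
rng = np.random.default_rng(0)
n_latent, q, T = 2, 20, 5000
A = np.array([[0.95, 0.1], [-0.1, 0.95]])
C = rng.normal(size=(q, n_latent))
x = np.zeros(n_latent)
Y = np.empty((T, q))
for t in range(T):
    x = A @ x + rng.normal(scale=0.5, size=n_latent)
    Y[t] = C @ x + rng.normal(scale=0.1, size=q)

A_hat, C_hat = ssid_estimate(Y, n_latent)
# Latent coordinates are identified only up to a similarity transform,
# so compare the estimated and true dynamics via their eigenvalues
print(np.sort(np.linalg.eigvals(A_hat)))
print(np.sort(np.linalg.eigvals(A)))
```

Because the latent coordinates are only identified up to an invertible change of basis, the recovered dynamics matrix should be compared to the ground truth through similarity-invariant quantities such as its eigenvalues, as in the last lines of the sketch.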