
Released

Conference Paper

Neural system identification for large populations separating "what" and "where"

MPS-Authors

Bethge, M.
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Klindt, D., Ecker, A., Euler, T., & Bethge, M. (2018). Neural system identification for large populations separating "what" and "where". In I. Guyon, U. von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, et al. (Eds.), Advances in Neural Information Processing Systems 30 (pp. 3507-3517). Red Hook, NY, USA: Curran.


Cite as: http://hdl.handle.net/21.11116/0000-0000-C369-E
Abstract
Neuroscientists classify neurons into different types that perform similar computations at different locations in the visual field. Traditional neural system identification methods do not capitalize on this separation of "what" and "where". Learning deep convolutional feature spaces shared among many neurons provides an exciting path forward, but the architectural design needs to account for data limitations: while new experimental techniques enable recordings from thousands of neurons, experimental time is limited, so that one can sample only a small fraction of each neuron's response space. Here, we show that a major bottleneck for fitting convolutional neural networks (CNNs) to neural data is the estimation of the individual receptive field locations -- a problem that has so far only been scratched at the surface. We propose a CNN architecture with a sparse pooling layer factorizing the spatial (where) and feature (what) dimensions. Our network scales well to thousands of neurons and short recordings and can be trained end-to-end. We evaluate this architecture on ground-truth data to explore the challenges and limitations of CNN-based system identification. Moreover, we show that our network model outperforms the current state-of-the-art system identification model of mouse primary visual cortex on a publicly available dataset.
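The factorized readout described in the abstract can be illustrated with a minimal numpy sketch: a convolutional feature space shared across neurons is read out per neuron by a spatial mask ("where") and a vector of feature weights ("what"), rather than by a full dense readout tensor. All names and dimensions here (`feature_maps`, `spatial_mask`, `feature_weights`) are illustrative assumptions, not the paper's actual code or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: feature-map height/width, channels, number of neurons.
H, W, K, N = 16, 16, 8, 3

# Shared convolutional feature space f_k(x, y) (here just random values
# standing in for the CNN core's output).
feature_maps = rng.normal(size=(H, W, K))

# Per-neuron factors: a spatial mask w_n(x, y) ("where") and
# feature weights u_n(k) ("what").
spatial_mask = rng.normal(size=(N, H, W))
feature_weights = rng.normal(size=(N, K))

# A full (unfactorized) readout would need N * H * W * K parameters;
# the factorization needs only N * (H * W + K).
full_params = N * H * W * K          # 6144
factorized_params = N * (H * W + K)  # 792

# Predicted response: r_n = sum_{x,y,k} w_n(x, y) * u_n(k) * f_k(x, y)
responses = np.einsum('nhw,nk,hwk->n',
                      spatial_mask, feature_weights, feature_maps)

print(factorized_params, full_params)  # 792 6144
print(responses.shape)                 # (3,)
```

In the paper the spatial masks are additionally encouraged to be sparse, since each neuron pools from a localized receptive field; that regularization is omitted here for brevity.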