
Poster

A goal-driven deep learning approach for V1 system identification

Citation

Cadena, S., Ecker, A., Denfield, G., Walker, E., Tolias, A., & Bethge, M. (2016). A goal-driven deep learning approach for V1 system identification. Poster presented at Bernstein Conference 2016, Berlin, Germany.


Cite as: https://hdl.handle.net/21.11116/0000-0000-7B06-0
Abstract
Understanding sensory processing in the visual system requires accurate predictions of its neural responses to arbitrary stimuli. Although great effort has been devoted to this task, we still lack a full characterization of primary visual cortex (V1) computations in response to naturalistic stimuli and of their role in higher cognitive tasks such as object recognition. While previous goal-driven deep learning models have provided unprecedented performance in predicting responses along the visual ventral stream and revealed a hierarchical correspondence, no study has used the representations learned by these models to predict single-cell spike counts in V1. We introduce a novel model (Fig. 1A) that leverages these learned representations to build a linearized model with Poisson noise. We separately use the representations of each convolutional layer of a near-state-of-the-art convolutional neural network (CNN) trained on object recognition to fit a model that predicts V1 responses to naturalistic stimuli. When fitted to data recorded from neurons across cortical layers of V1 in an awake, fixating monkey, we found that, as expected, intermediate early layers of the CNN yielded the best predictive performance on held-out data (Fig. 1B). Additionally, we show that, using the best predictive layers, our model significantly outperforms classical and current state-of-the-art methods for V1 system identification (Fig. 1C). When exploring the properties of the best predictive layers of the CNN, we found striking similarities with known V1 computations. Our model is not only interpretable, but also interpolates between recent subunit-based hierarchical models and goal-driven deep learning models, leading to results that argue in favor of shared representations in the brain.
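
To make the modelling recipe concrete, the following Python sketch illustrates the general approach described above: activations from one convolutional layer of an ImageNet-trained CNN serve as the feature space for a per-neuron linear readout with Poisson noise (a Poisson GLM). The choice of torchvision's VGG16, the layer index, the L2 regularizer, and all data shapes are illustrative assumptions for this sketch, not the authors' exact configuration.

# Sketch: predict spike counts from one CNN layer's representation via a Poisson GLM.
# VGG16, layer_index, and the placeholder data below are assumptions, not the
# configuration reported in the poster.

import numpy as np
import torch
import torchvision.models as models                 # requires torchvision >= 0.13
from sklearn.linear_model import PoissonRegressor

# 1. Load a pretrained object-recognition CNN and pick one convolutional stage.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
layer_index = 10  # hypothetical intermediate convolutional layer

def layer_features(images: torch.Tensor) -> np.ndarray:
    """Run images through the CNN up to layer_index and flatten the activations."""
    with torch.no_grad():
        x = images
        for i, module in enumerate(cnn):
            x = module(x)
            if i == layer_index:
                break
    return x.flatten(start_dim=1).numpy()

# 2. Fit a linearized model with Poisson noise on that layer's representation.
#    stimuli (N x 3 x H x W) and spike_counts (N,) stand in for recorded data.
stimuli = torch.rand(200, 3, 64, 64)            # placeholder naturalistic images
spike_counts = np.random.poisson(2.0, 200)      # placeholder responses of one neuron

X = layer_features(stimuli)
glm = PoissonRegressor(alpha=1.0, max_iter=300)  # L2 penalty as a stand-in regularizer
glm.fit(X, spike_counts)

# 3. Predict firing rates for (ideally held-out) stimuli; the training set is
#    reused here only to keep the sketch self-contained.
predicted_rates = glm.predict(X)

In practice each convolutional layer would be fitted separately and the layers compared on held-out data, which is the comparison summarized in Fig. 1B.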