Poster

A goal-driven deep learning approach for V1 system identification

Citation

Cadena, S., Ecker, A., Denfield, G., Walker, E., Tolias, A., & Bethge, M. (2017). A goal-driven deep learning approach for V1 system identification. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2017), Salt Lake City, UT, USA.


Cite as: http://hdl.handle.net/21.11116/0000-0000-C509-8
Abstract
Understanding sensory processing in the visual system hinges on accurately predicting its neural responses to arbitrary stimuli. Despite great efforts over the last decades, we still lack a full characterization of the computations in primary visual cortex (V1) and of their role in higher cognitive tasks (e.g., object recognition). Recent goal-driven deep learning models have provided unprecedented predictive performance on the visual ventral stream and revealed a hierarchical correspondence between network layers and brain areas. However, it remains to be assessed whether their learned representations can also be used to predict single-cell responses in V1. Here, we leverage these learned representations to build a model that predicts responses to natural images across cortical layers of monkey V1. We use the internal representations of a high-performing convolutional neural network (CNN) trained on object recognition as a non-linear feature space for a generalized linear model (GLM). We found that early intermediate layers of the CNN provided the best predictive performance on held-out data. Our model significantly outperformed both classical and current state-of-the-art methods on V1 system identification. When exploring the properties of the best-predicting layers of the CNN, we found striking similarities with known V1 computations. Our model is not only interpretable, but also interpolates between recent subunit-based hierarchical models and goal-driven deep learning models, yielding results that argue in favor of shared representations in the brain.
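
For readers who want a concrete picture of the pipeline, the following Python sketch mirrors the approach described in the abstract: activations from an intermediate layer of a CNN pretrained on object recognition serve as a fixed non-linear feature space, and a regularized Poisson GLM readout is fit to a neuron's responses. The specific network (VGG-16), layer index, image size, and the synthetic stimuli and spike counts are illustrative assumptions, not the authors' exact setup.

    # Minimal sketch: CNN features as a non-linear basis for a GLM readout.
    # Assumptions (not from the poster): VGG-16, layer index 10, toy data.
    import numpy as np
    import torch
    from torchvision.models import vgg16
    from sklearn.linear_model import PoissonRegressor

    # CNN pretrained on object recognition; keep only the conv stack, frozen.
    cnn = vgg16(weights="IMAGENET1K_V1").features.eval()

    LAYER = 10  # hypothetical choice of an intermediate conv layer

    @torch.no_grad()
    def cnn_features(images: torch.Tensor) -> np.ndarray:
        """Run images (N, 3, H, W) up to LAYER and flatten the activations."""
        x = images
        for i, module in enumerate(cnn):
            x = module(x)
            if i == LAYER:
                break
        return x.flatten(start_dim=1).numpy()

    # Toy stand-ins for the real data: natural-image stimuli and one neuron's
    # spike counts (in the study these come from monkey V1 recordings).
    rng = np.random.default_rng(0)
    stimuli = torch.rand(200, 3, 32, 32)
    spike_counts = rng.poisson(2.0, size=200)

    X = cnn_features(stimuli)
    glm = PoissonRegressor(alpha=1.0, max_iter=300)  # L2-regularized GLM
    glm.fit(X, spike_counts)

    # Predictive performance on held-out data would be assessed by comparing
    # glm.predict(...) against measured responses, e.g. via correlation.

Repeating the fit with features taken from each layer of the network, and comparing held-out predictive performance per layer, is the natural way to reproduce the layer-wise comparison the abstract describes.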