  Deep convolutional models improve predictions of macaque V1 responses to natural images

Cadena, S. A., Denfield, G. H., Walker, E. Y., Gatys, L. A., Tolias, A. S., Bethge, M., & Ecker, A. S. (2019). Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS Computational Biology, 15(4), 1-27. doi:10.1371/journal.pcbi.1006897.

Basic
Item Permalink: http://hdl.handle.net/21.11116/0000-0003-7A0E-6 Version Permalink: http://hdl.handle.net/21.11116/0000-0003-7A0F-5
Genre: Journal Article

Files


Creators

Creators:
Cadena, SA, Author
Denfield, GH, Author
Walker, EY, Author
Gatys, LA, Author
Tolias, AS, Author
Bethge, M (1, 2), Author
Ecker, AS, Author
Affiliations:
(1) Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794
(2) Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805

Content

Free keywords: -
Abstract: Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have emerged for modeling these nonlinear computations: transfer learning from artificial neural networks trained on object recognition and data-driven convolutional neural network models trained end-to-end on large populations of neurons. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. We found that the transfer learning approach performed similarly well to the data-driven approach and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1 and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories. This finding strengthens the necessity of V1 models that are multiple nonlinearities away from the image domain and it supports the idea of explaining early visual cortex based on high-level functional goals.

Details

Language(s): -
Dates: 2019-04
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Method: -
Identifiers: DOI: 10.1371/journal.pcbi.1006897
eDoc: e1006897
Degree: -

Source 1

Title: PLoS Computational Biology
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: San Francisco, CA : Public Library of Science
Pages: -
Volume / Issue: 15 (4)
Sequence Number: -
Start / End Page: 1 - 27
Identifier: ISSN: 1553-734X
CoNE: https://pure.mpg.de/cone/journals/resource/1000000000017180_1