
  A rotation-equivariant convolutional neural network model of primary visual cortex

Ecker, A., Sinz, F., Froudarakis, E., Fahey, P., Cadena, S., Walker, E., et al. (submitted). A rotation-equivariant convolutional neural network model of primary visual cortex.

Basic
Item Permalink: http://hdl.handle.net/21.11116/0000-0002-4EC6-8
Version Permalink: http://hdl.handle.net/21.11116/0000-0002-F82E-4
Genre: Conference Paper

Files


Locators

Locator:
https://arxiv.org/abs/1809.10504 (Any fulltext)
Description:
-

Creators

Creators:
Ecker, A, Author
Sinz, FH, Author
Froudarakis, E, Author
Fahey, PG, Author
Cadena, SA, Author
Walker, EY, Author
Cobos, E, Author
Reimer, J, Author
Tolias, AS, Author
Bethge, M 1, 2, Author
Affiliations:
1: Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794

Content

Free keywords: -
 Abstract: Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that models based on convolutional neural networks (CNNs) lead to much more accurate predictions, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this model to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network not only outperforms a regular CNN with the same number of feature maps, but also reveals a number of common features shared by many V1 neurons, which deviate from the typical textbook idea of V1 as a bank of Gabor filters. Our findings are a first step towards a powerful new tool to study the nonlinear computations in V1.
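The core idea in the abstract, that a rotation-equivariant CNN automatically extracts every feature at multiple orientations, can be sketched in a few lines. The snippet below is a hypothetical NumPy/SciPy illustration (not the authors' code, which handles arbitrary rotation angles): one shared filter is applied at its four 90-degree rotations, so the same feature is detected at several orientations, and rotating the input merely rotates the response maps and cyclically permutes the orientation channels.

```python
import numpy as np
from scipy.signal import correlate2d

# Hypothetical sketch of rotation-equivariant convolution for the
# four 90-degree rotations (the paper's model covers finer angles).
rng = np.random.default_rng(0)
base_filter = rng.standard_normal((3, 3))  # one shared, "learned" filter
image = rng.standard_normal((8, 8))

# Weight sharing across orientations: rotate the filter, not the data.
rotated_filters = [np.rot90(base_filter, k) for k in range(4)]
responses = [correlate2d(image, f, mode="valid") for f in rotated_filters]

# Equivariance check: rotating the input by 90 degrees rotates each
# response map and cyclically shifts the orientation channels.
rot_image = np.rot90(image)
rot_responses = [correlate2d(rot_image, f, mode="valid")
                 for f in rotated_filters]
for k in range(4):
    assert np.allclose(rot_responses[(k + 1) % 4], np.rot90(responses[k]))
```

Because all four orientation channels share one set of weights, features common to many neurons can be identified independently of each neuron's preferred orientation, which is the premise of the framework described above.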

Details

Language(s): -
 Dates: 2018-09
 Publication Status: Submitted
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Method: -
 Identifiers: -
 Degree: -

Event

Title: Seventh International Conference on Learning Representations (ICLR 2019)
Place of Event: New Orleans, LA, USA
Start-/End Date: 2019-05-06 - 2019-05-09

Source 1

Title: Seventh International Conference on Learning Representations (ICLR 2019)
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: -