
Record

  A rotation-equivariant convolutional neural network model of primary visual cortex

Ecker, A., Sinz, F., Froudarakis, E., Fahey, P., Cadena, S., Walker, E., et al. (submitted). A rotation-equivariant convolutional neural network model of primary visual cortex.

Basic Data

Record Permalink: http://hdl.handle.net/21.11116/0000-0002-4EC6-8
Version Permalink: http://hdl.handle.net/21.11116/0000-0002-F82E-4
Genre: Conference Paper

External References

External Reference:
https://arxiv.org/abs/1809.10504 (any fulltext)
Description:
-

Creators

Creator(s):
Ecker, A, Author
Sinz, FH, Author
Froudarakis, E, Author
Fahey, PG, Author
Cadena, SA, Author
Walker, EY, Author
Cobos, E, Author
Reimer, J, Author
Tolias, AS, Author
Bethge, M 1, 2, Author
Affiliations:
1 Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794

Content

Keywords: -
Abstract: Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that models based on convolutional neural networks (CNNs) lead to much more accurate predictions, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this model to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network not only outperforms a regular CNN with the same number of feature maps, but also reveals a number of common features shared by many V1 neurons, which deviate from the typical textbook idea of V1 as a bank of Gabor filters. Our findings are a first step towards a powerful new tool to study the nonlinear computations in V1.
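The core idea of rotation equivariance, sharing each learned filter across several rotated copies so that every feature is extracted at multiple orientations, can be sketched in a few lines. The snippet below is a hypothetical illustration restricted to exact 90-degree rotations; the paper's model learns filters at finer rotation angles inside a deep CNN, and `rotation_equivariant_conv` is an assumed name, not the authors' code.

```python
import numpy as np
from scipy.signal import correlate2d

def rotation_equivariant_conv(image, base_filter, n_rot=4):
    """Correlate the image with n_rot rotated copies of one base filter
    (multiples of 90 degrees), giving one output map per orientation."""
    return np.stack([
        correlate2d(image, np.rot90(base_filter, k), mode="valid")
        for k in range(n_rot)
    ])

# Equivariance check: rotating the input by 90 degrees rotates each output
# map and cyclically shifts the orientation channels, so no feature has to
# be relearned per orientation.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
filt = rng.standard_normal((3, 3))

out = rotation_equivariant_conv(img, filt)          # shape (4, 6, 6)
out_rot = rotation_equivariant_conv(np.rot90(img), filt)
assert np.allclose(out_rot[1], np.rot90(out[0]))
```

Because rotated inputs map to predictably permuted outputs, a readout over the orientation channels can compare neurons' computations independently of their preferred orientation, which is what makes the grouping of V1 neurons described in the abstract possible.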

Details

Language(s):
Date: 2018-09
Publication Status: Submitted
Pages: -
Place, Publisher, Edition: -
Table of Contents: -
Review Method: -
Identifiers: -
Degree Type: -

Event

Title: Seventh International Conference on Learning Representations (ICLR 2019)
Venue: New Orleans, LA, USA
Start/End Date: 2019-05-06 - 2019-05-09

Source 1

Title: Seventh International Conference on Learning Representations (ICLR 2019)
Source Genre: Conference Proceedings
Creator(s): -
Affiliations: -
Place, Publisher, Edition: -
Pages: -
Volume / Issue: -
Article Number: -
Start / End Page: -
Identifier: -