  Variational Autoencoder account of the early visual hierarchy

Orban, G., Banyai, M., & Nagy, D. (2020). Variational Autoencoder account of the early visual hierarchy. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2020), Denver, CO, USA.


External References

External reference:
http://cosyne.org/cosyne20/Cosyne2020_program_book.pdf (Abstract)
Description:
-
OA status:

Creators

Creators:
Orban, G, Author
Banyai, M1, Author
Nagy, D, Author
Affiliations:
1 External Organizations, ou_persistent22

Content

Keywords: -
Abstract: Visual perception is the process of making inferences about latent variables in the environment based on observations. There is strong neural and behavioral support that the cortical implementation of this process is hierarchical and also represents uncertainty about the latents. Thus, given the success of supervised feed-forward neural networks in predicting mean neural responses, nonlinear probabilistic models of complex images, featuring hierarchically organized latents, are well-motivated to predict not only the mean but also higher-order statistics of neural responses. We fit hierarchical variational autoencoders, which inherently utilize both feed-forward and recurrent connections, to image data in an unsupervised manner to make predictions about the representation of textures in early visual cortical areas. We demonstrate that consecutive latent layers learn increasingly compressed representations of stimuli, while also producing a disentangled representation of contrast. The proposed computational framework accounts for two distinct experimental observations in macaques. First, the linear decodability of the identity of texture stimuli has been found to be higher in V1 than in V2, while the reverse is true if the texture family of the stimulus is to be decoded. We show that our model has the same property, indicating that training discards local feature information from the second layer of latents present in the first, while retaining information about global stimulus statistics. Second, noise correlations in V1, in addition to being stimulus-specific, have been measured to vary according to the higher-order statistical content of the stimulus. We demonstrate that in our model, noise correlations in the first layer of latents show greater variance between stimuli from different texture families due to the effect of the top-down contextual prior from the second layer. These results show that hierarchical Bayesian models naturally extend feed-forward models to the probabilistic, unsupervised domain and account for a range of anatomical and electrophysiological observations.
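The two-layer latent hierarchy the abstract describes — a bottom-up (feed-forward) inference pass, stochastic latents sampled via reparameterization, and a top-down (recurrent) pass in which the second latent layer sets a contextual prior over the first — can be sketched roughly as follows. All layer sizes, linear maps, and variable names here are illustrative assumptions for exposition, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Illustrative layer sizes: pixels -> z1 (local features) -> z2 (global statistics).
D, Z1, Z2 = 64, 16, 4

# Randomly initialised linear maps stand in for trained encoder/decoder networks.
W_enc1 = rng.standard_normal((D, 2 * Z1)) * 0.1    # x  -> (mu1, log_var1)
W_enc2 = rng.standard_normal((Z1, 2 * Z2)) * 0.1   # z1 -> (mu2, log_var2)
W_prior = rng.standard_normal((Z2, 2 * Z1)) * 0.1  # top-down prior over z1
W_dec = rng.standard_normal((Z1, D)) * 0.1         # z1 -> reconstruction

x = rng.standard_normal((8, D))  # a batch of 8 flattened image patches

# Bottom-up (feed-forward) inference pass.
mu1, log_var1 = np.split(x @ W_enc1, 2, axis=1)
z1 = reparameterize(mu1, log_var1)        # compressed local representation

mu2, log_var2 = np.split(z1 @ W_enc2, 2, axis=1)
z2 = reparameterize(mu2, log_var2)        # further-compressed global representation

# Top-down (recurrent) pass: z2 parameterizes a contextual prior over z1 --
# the mechanism the abstract invokes for stimulus-dependent noise
# correlations in the first latent layer.
prior_mu1, prior_log_var1 = np.split(z2 @ W_prior, 2, axis=1)
x_hat = z1 @ W_dec                        # reconstruction from the first layer

print(z1.shape, z2.shape, x_hat.shape)    # (8, 16) (8, 4) (8, 64)
```

Each latent layer is lower-dimensional than the one below it, mirroring the "increasingly compressed representations" the abstract reports for consecutive layers.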

Details

Language(s):
Date: 2020-03
Publication status: Published online
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: -
Type of degree: -
Identifiers: -

Event

Title: Computational and Systems Neuroscience Meeting (COSYNE 2020)
Venue: Denver, CO, USA
Start/end date: 2020-02-27 - 2020-03-01


Source 1

Title: Computational and Systems Neuroscience Meeting (COSYNE 2020)
Source genre: Conference proceedings
Creators:
Affiliations:
Place, publisher, edition: -
Pages: -
Volume / Issue: -
Article number: II-75
Start / End page: 165 - 166
Identifier: -