  Towards matching the peripheral visual appearance of arbitrary scenes using deep convolutional neural networks

Wallis, T., Funke, C., Ecker, A., Gatys, L., Wichmann, F., & Bethge, M. (2016). Towards matching the peripheral visual appearance of arbitrary scenes using deep convolutional neural networks. Perception, 45(ECVP Abstract Supplement), 175-176.


Basic data

Genre: Meeting Abstract

External references

External reference: Link (any full text)
Description: -
OA status: -

Creators

Creators:
Wallis, TS, Author
Funke, CM, Author
Ecker, AS (1, 2, 3), Author
Gatys, LA, Author
Wichmann, FA (2, 4), Author
Bethge, M (1, 2), Author
Affiliations:
1: Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794
3: Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497798
4: Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society, ou_1497647

Content

Keywords: -
Abstract: Distortions of image structure can go unnoticed in the visual periphery, and objects can be harder to identify (crowding). Is it possible to create equivalence classes of images that discard and distort image structure but appear the same as the original images? Here we use deep convolutional neural networks (CNNs) to study peripheral representations that are texture-like, in that summary statistics within some pooling region are preserved but local position is lost. Building on our previous work generating textures by matching CNN responses, we first show that while CNN textures are difficult to discriminate from many natural textures, they fail to match the appearance of scenes at a range of eccentricities and sizes. Because texturising scenes discards long-range correlations over too large an area, we next generate images that match CNN features within overlapping pooling regions (see also Freeman and Simoncelli, 2011). These images are more difficult to discriminate from the original scenes, indicating that constraining features by their neighbouring pooling regions provides greater perceptual fidelity. Our ultimate goal is to determine the minimal set of deep CNN features that produce metameric stimuli by varying the feature complexity and pooling regions used to represent the image.
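As an illustration of the texture-synthesis building block the abstract refers to (generating an image whose CNN summary statistics match those of a target), the sketch below shows Gram-matrix matching with a pretrained VGG-19 in PyTorch. It is a minimal, assumed setup, not the authors' exact pipeline: the layer indices, image size, optimiser settings and the file name texture.jpg are illustrative, input normalisation is skipped, and the paper's extension additionally pools such statistics within local, overlapping regions rather than globally.

    # Minimal sketch of CNN texture synthesis by matching Gram-matrix statistics
    # (in the spirit of Gatys et al.); layer choice and settings are assumptions.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Pretrained VGG-19 feature stack as the fixed CNN representation.
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    LAYERS = {1, 6, 11, 20, 29}  # relu1_1 ... relu5_1 (assumed choice of layers)

    def features(x):
        """Collect activations at the chosen VGG layers."""
        feats = []
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in LAYERS:
                feats.append(x)
        return feats

    def gram(f):
        """Spatially pooled summary statistic: channel-by-channel correlations."""
        b, c, h, w = f.shape
        f = f.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    # Load the target texture and start the synthesised image from noise.
    # (VGG input normalisation is omitted here for brevity.)
    preprocess = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
    target = preprocess(Image.open("texture.jpg").convert("RGB")).unsqueeze(0).to(device)
    target_grams = [gram(f) for f in features(target)]

    synth = torch.rand_like(target, requires_grad=True)
    opt = torch.optim.LBFGS([synth])

    def closure():
        opt.zero_grad()
        loss = sum(F.mse_loss(gram(f), g) for f, g in zip(features(synth), target_grams))
        loss.backward()
        return loss

    for _ in range(50):  # optimise the image so its Gram statistics match the target
        opt.step(closure)

Because the Gram matrix discards spatial position within its pooling region, images produced this way preserve summary statistics but not local structure, which is exactly the texture-like peripheral representation the abstract describes.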

Details

Language(s):
Date: 2016-12
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: DOI: 10.1177/0301006616671273
BibTeX citekey: WallisFEGWB2016
Type of degree: -

Event

Title: 39th European Conference on Visual Perception (ECVP 2016)
Venue: Barcelona, Spain
Start/end date: 2016-08-29 - 2016-09-01

Source 1

Title: Perception
Source genre: Journal
Creators:
Affiliations:
Place, publisher, edition: London : Pion Ltd.
Pages: -
Volume / Issue: 45 (ECVP Abstract Supplement)
Article number: -
Start / end page: 175 - 176
Identifier: ISSN: 0301-0066
CoNE: https://pure.mpg.de/cone/journals/resource/954925509369