A parametric texture model based on deep convolutional features closely matches texture appearance for humans

Wallis, T., Funke, C., Ecker, A., Gatys, L., Wichmann, F., & Bethge, M. (2017). A parametric texture model based on deep convolutional features closely matches texture appearance for humans. Journal of Vision, 17(12): 5, pp. 1-29. doi:10.1167/17.12.5.

Basic data

Genre: Journal article

External references

Description: -
OA status: -

Creators

Creators:
Wallis, TSA, Author
Funke, CM, Author
Ecker, AS, Author
Gatys, LA, Author
Wichmann, FA, Author
Bethge, M (1, 2), Author
Affiliations:
(1) Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794
(2) Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805

Content

Keywords: -
Abstract: Our visual environment is full of texture—“stuff” like cloth, bark, or gravel as distinct from “things” like dresses, trees, or paths—and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery (“parafoveal”) and when observers were able to make eye movements to all three patches (“inspection”). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.
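Background note: the “CNN model” in the abstract is a Gatys-style parametric texture model that describes a texture by the pairwise channel correlations (Gram matrices) of VGG-19 feature maps and synthesizes a new image by gradient descent on the mismatch in those statistics. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the layer indices, optimizer choice, iteration count, and image size are illustrative assumptions.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Assumed descriptor layers: conv1_1 ... conv5_1 in torchvision's vgg19().features.
GRAM_LAYERS = {0, 5, 10, 19, 28}

def gram_matrix(feats):
    # Channel-by-channel correlations of a conv feature map, normalized by size.
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def texture_loss(features, synth, target):
    # Sum of squared Gram-matrix differences across the selected layers.
    loss = torch.zeros((), device=synth.device)
    xs, xt = synth, target
    for i, layer in enumerate(features):
        xs, xt = layer(xs), layer(xt)
        if i in GRAM_LAYERS:
            loss = loss + F.mse_loss(gram_matrix(xs), gram_matrix(xt))
    return loss

# Synthesize by descending on the texture loss, starting from noise.
vgg = vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

target = torch.rand(1, 3, 256, 256)  # stand-in for a natural texture image
synth = torch.rand_like(target).requires_grad_(True)
opt = torch.optim.LBFGS([synth])

def closure():
    opt.zero_grad()
    loss = texture_loss(vgg, synth, target)
    loss.backward()
    return loss

for _ in range(20):
    opt.step(closure)

The paper's extended “CNN + power spectrum” model additionally constrains the Fourier amplitude spectrum of the synthesized image; that term is omitted from this sketch.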

Details

Language(s): -
Date: 2017-10
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: DOI: 10.1167/17.12.5
Degree: -

Source 1

Title: Journal of Vision
Source genre: Journal
Creators: -
Affiliations: -
Place, publisher, edition: Charlottesville, VA : Scholar One, Inc.
Pages: -
Volume / Issue: 17 (12)
Article number: 5
Start / End page: 1 - 29
Identifier: ISSN: 1534-7362
CoNE: https://pure.mpg.de/cone/journals/resource/111061245811050