  A parametric texture model based on deep convolutional features closely matches texture appearance for humans

Wallis, T., Funke, C., Ecker, A., Gatys, L., Wichmann, F., & Bethge, M. (2017). A parametric texture model based on deep convolutional features closely matches texture appearance for humans. Poster presented at 17th Annual Meeting of the Vision Sciences Society (VSS 2017), St. Pete Beach, FL, USA.

External References

External reference:
Link (any full text)
Description: -
OA-Status: -
Creators

Creators:
Wallis, TSA, Author
Funke, CM, Author
Ecker, AS, Author
Gatys, LA, Author
Wichmann, FA, Author
Bethge, M1, Author
Affiliations:
1 External Organizations, ou_persistent22

Content

Keywords: -
Abstract: Much of our visual environment consists of texture—“stuff” like cloth, bark or gravel as distinct from “things” like dresses, trees or paths—and we humans are adept at perceiving textures and their subtle variation. How does our visual system achieve this feat? Here we psychophysically evaluate a new parametric model of texture appearance (the CNN texture model; Gatys et al., 2015) that is based on the features encoded by a deep convolutional neural network (deep CNN) trained to recognise objects in images (the VGG-19; Simonyan and Zisserman, 2015). By cumulatively matching the correlations of deep features up to a given layer (using up to five convolutional layers) we were able to evaluate models of increasing complexity. We used a three-alternative spatial oddity task to test whether model-generated textures could be discriminated from original natural textures under two viewing conditions: when test patches were briefly presented to the parafovea (“single fixation”) and when observers were able to make eye movements to all three patches (“inspection”). For 9 of the 12 source textures we tested, the models using more than three layers produced images that were indiscriminable from the originals even under foveal inspection. The venerable parametric texture model of Portilla and Simoncelli (Portilla and Simoncelli, 2000) was also able to match the appearance of these textures in the single fixation condition, but not under inspection. Of the three source textures our model could not match, two contain strong periodicities. In a second experiment, we found that matching the power spectrum in addition to the deep features used above (Liu et al., 2016) greatly improved matches for these two textures. These results suggest that the features learned by deep CNNs encode statistical regularities of natural scenes that capture important aspects of material perception in humans.
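The “correlations of deep features” that the CNN texture model matches are the Gram matrices of CNN feature maps, following Gatys et al. (2015). A minimal NumPy sketch of that statistic, using a random array as a stand-in for actual VGG-19 layer activations:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel feature correlations, as in Gatys et al. (2015).

    features: array of shape (channels, height, width). Here it is a random
    stand-in; the actual model uses VGG-19 convolutional-layer activations.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (h * w)         # (c, c) matrix of channel correlations

rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 28, 28))  # hypothetical layer activations
g = gram_matrix(feats)
print(g.shape)  # (64, 64)
```

Texture synthesis under this model then optimizes a candidate image so that its Gram matrices match those of the original texture at the chosen layers; the abstract's “cumulatively matching … up to a given layer” corresponds to including these matrices for successively deeper layers.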

Details

Language(s):
Date: 2017-10
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: DOI: 10.1167/17.10.1081
BibTex Citekey: FunkeWEGWB2017
Degree type: -

Event

Title: 17th Annual Meeting of the Vision Sciences Society (VSS 2017)
Venue: St. Pete Beach, FL, USA
Start/end date: 2017-05-19 - 2017-05-24

Source 1

Title: Journal of Vision
Source genre: Journal
Creators:
Affiliations:
Place, publisher, edition: Charlottesville, VA : Scholar One, Inc.
Pages: -
Volume / Issue: 17 (10)
Article number: -
Start / End page: 1081
Identifier: ISSN: 1534-7362
CoNE: https://pure.mpg.de/cone/journals/resource/111061245811050