Item Details

  A parametric texture model based on deep convolutional features closely matches texture appearance for humans

Wallis, T., Funke, C., Ecker, A., Gatys, L., Wichmann, F., & Bethge, M. (2017). A parametric texture model based on deep convolutional features closely matches texture appearance for humans. Poster presented at 17th Annual Meeting of the Vision Sciences Society (VSS 2017), St. Pete Beach, FL, USA.

Basic Information

Item Permalink: https://hdl.handle.net/21.11116/0000-0000-C403-F
Version Permalink: https://hdl.handle.net/21.11116/0000-0006-B4BD-A
Resource Type: Poster

Files

Related URLs

URL:
Link (Full text (general))
Description:
-
OA-Status:

Creators

Creators:
Wallis, TSA, Author
Funke, CM, Author
Ecker, AS, Author
Gatys, LA, Author
Wichmann, FA, Author
Bethge, M1, Author
Affiliations:
1 External Organizations, ou_persistent22

Content Description

Keywords: -
Abstract: Much of our visual environment consists of texture—“stuff” like cloth, bark or gravel as distinct from “things” like dresses, trees or paths—and we humans are adept at perceiving textures and their subtle variation. How does our visual system achieve this feat? Here we psychophysically evaluate a new parametric model of texture appearance (the CNN texture model; Gatys et al., 2015) that is based on the features encoded by a deep convolutional neural network (deep CNN) trained to recognise objects in images (the VGG-19; Simonyan and Zisserman, 2015). By cumulatively matching the correlations of deep features up to a given layer (using up to five convolutional layers) we were able to evaluate models of increasing complexity. We used a three-alternative spatial oddity task to test whether model-generated textures could be discriminated from original natural textures under two viewing conditions: when test patches were briefly presented to the parafovea (“single fixation”) and when observers were able to make eye movements to all three patches (“inspection”). For 9 of the 12 source textures we tested, the models using more than three layers produced images that were indiscriminable from the originals even under foveal inspection. The venerable parametric texture model of Portilla and Simoncelli (Portilla and Simoncelli, 2000) was also able to match the appearance of these textures in the single fixation condition, but not under inspection. Of the three source textures our model could not match, two contain strong periodicities. In a second experiment, we found that matching the power spectrum in addition to the deep features used above (Liu et al., 2016) greatly improved matches for these two textures. These results suggest that the features learned by deep CNNs encode statistical regularities of natural scenes that capture important aspects of material perception in humans.
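
The CNN texture model evaluated in the abstract is defined in the cited papers rather than on this page, but its core computation is matching the correlations of deep feature maps (Gram matrices), cumulatively up to a given VGG-19 layer. The following PyTorch sketch illustrates that idea under stated assumptions; the layer indices, image size, optimiser and iteration count are illustrative guesses, not the configuration used for the poster's stimuli.

    # Minimal sketch of Gram-matrix texture synthesis in the style of
    # Gatys et al. (2015). Layer indices, image size and optimiser choices
    # are illustrative assumptions, not the poster's actual setup.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Positions of conv1_1 ... conv5_1 in torchvision's vgg19().features
    # (an assumption based on the standard VGG-19 layer layout).
    CONV_LAYERS = [0, 5, 10, 19, 28]

    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def gram(feat):
        # feat: (1, C, H, W) -> (C, C) channel-correlation matrix,
        # normalised by the number of channels and spatial positions.
        _, c, h, w = feat.shape
        f = feat.reshape(c, h * w)
        return (f @ f.t()) / (c * h * w)

    def grams_up_to(img, n_layers):
        # Cumulative matching: collect Gram matrices from the first
        # n_layers matched conv layers (the abstract's "models of
        # increasing complexity").
        out, x = [], img
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in CONV_LAYERS[:n_layers]:
                out.append(gram(x))
            if i >= CONV_LAYERS[n_layers - 1]:
                break
        return out

    source = torch.rand(1, 3, 256, 256)  # stand-in for a natural texture image
    targets = [g.detach() for g in grams_up_to(source, 5)]

    # Synthesise an image from noise by descending on the Gram mismatch.
    synth = torch.rand(1, 3, 256, 256, requires_grad=True)
    opt = torch.optim.LBFGS([synth])

    def closure():
        opt.zero_grad()
        loss = sum(F.mse_loss(g, t)
                   for g, t in zip(grams_up_to(synth, 5), targets))
        loss.backward()
        return loss

    for _ in range(20):  # iteration count is arbitrary for this sketch
        opt.step(closure)

Passing a smaller n_layers to grams_up_to gives the lower-complexity models that the abstract compares against the full five-layer model.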
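For the two strongly periodic textures, the second experiment adds a power-spectrum constraint (Liu et al., 2016). A hedged sketch of such an extra loss term, continuing the assumptions above (the grayscale reduction and the relative weight are illustrative, not taken from the cited paper):

    def spectrum_loss(synth, source):
        # Match the magnitude of the 2D Fourier transform, which carries
        # the periodic structure that Gram matrices alone can miss.
        s = torch.fft.fft2(synth.mean(dim=1))   # crude grayscale reduction
        t = torch.fft.fft2(source.mean(dim=1))
        return F.mse_loss(s.abs(), t.abs())

    # Added to the Gram loss inside closure(), e.g.:
    # loss = gram_terms + 1e-2 * spectrum_loss(synth, source)  # weight is a guess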

Details

Language:
Date: 2017-10
Publication Status: Published
Pages: -
Publishing Info: -
Table of Contents: -
Peer Review: -
Identifiers (DOI, ISBN, etc.): DOI: 10.1167/17.10.1081
BibTeX Citation ID: FunkeWEGWB2017
Degree: -

Related Event

Event Name: 17th Annual Meeting of the Vision Sciences Society (VSS 2017)
Venue: St. Pete Beach, FL, USA
Start/End Date: 2017-05-19 - 2017-05-24

Publication 1

Publication Title: Journal of Vision
Type: Journal
Authors/Editors:
Affiliations:
Publisher, Place: Charlottesville, VA : Scholar One, Inc.
Pages: -
Volume/Issue: 17 (10)
Sequence Number: -
Start/End Page: 1081
Identifiers (ISBN, ISSN, DOI, etc.): ISSN: 1534-7362
CoNE: https://pure.mpg.de/cone/journals/resource/111061245811050