Towards matching the peripheral visual appearance of arbitrary scenes using deep convolutional neural networks

Wallis, T., Funke, C., Ecker, A., Gatys, L., Wichmann, F., & Bethge, M. (2016). Towards matching the peripheral visual appearance of arbitrary scenes using deep convolutional neural networks. Perception, 45(ECVP Abstract Supplement), 175-176.

Basic
Item Permalink: http://hdl.handle.net/21.11116/0000-0000-7C7D-A
Version Permalink: http://hdl.handle.net/21.11116/0000-0007-07C1-7
Genre: Meeting Abstract


Locators

Locator:
Link (Any fulltext)
Description: -

Creators

Creators:
Wallis, TS, Author
Funke, CM, Author
Ecker, AS 1,2,3, Author
Gatys, LA, Author
Wichmann, FA 2,4, Author
Bethge, M 1,2, Author
Affiliations:
1 Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794
3 Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497798
4 Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society, ou_1497647

Content

Free keywords: -
Abstract: Distortions of image structure can go unnoticed in the visual periphery, and objects can be harder to identify (crowding). Is it possible to create equivalence classes of images that discard and distort image structure but appear the same as the original images? Here we use deep convolutional neural networks (CNNs) to study peripheral representations that are texture-like, in that summary statistics within some pooling region are preserved but local position is lost. Building on our previous work generating textures by matching CNN responses, we first show that while CNN textures are difficult to discriminate from many natural textures, they fail to match the appearance of scenes at a range of eccentricities and sizes. Because texturising scenes discards long range correlations over too large an area, we next generate images that match CNN features within overlapping pooling regions (see also Freeman and Simoncelli, 2011). These images are more difficult to discriminate from the original scenes, indicating that constraining features by their neighbouring pooling regions provides greater perceptual fidelity. Our ultimate goal is to determine the minimal set of deep CNN features that produce metameric stimuli by varying the feature complexity and pooling regions used to represent the image.
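The core operation the abstract describes, summarising CNN feature maps by statistics computed within overlapping spatial pooling regions, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the Gram-matrix statistic follows the CNN texture model of Gatys et al. that the abstract builds on, while the region size, stride, and the random arrays standing in for real CNN feature maps are placeholder assumptions.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of feature maps: channel-by-channel correlations,
    discarding spatial position (the texture-like summary statistic)."""
    c, h, w = features.shape          # (channels, height, width)
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def pooled_gram_stats(features, region=8, stride=4):
    """Compute Gram matrices within overlapping pooling regions.
    stride < region makes neighbouring regions overlap, which is what
    constrains features by their neighbouring pooling regions."""
    c, h, w = features.shape
    stats = {}
    for y in range(0, h - region + 1, stride):
        for x in range(0, w - region + 1, stride):
            patch = features[:, y:y + region, x:x + region]
            stats[(y, x)] = gram_matrix(patch)
    return stats

# Toy example: random arrays stand in for real CNN feature responses.
feats = np.random.rand(16, 32, 32)
stats = pooled_gram_stats(feats)
```

In the synthesis procedure the abstract outlines, an image would be optimised so that its pooled statistics match those of the original scene; the overlap between regions is the design choice that preserves longer-range correlations lost by independent texturising.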

Details

Language(s): -
Dates: 2016-12
Publication Status: Published in print
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1177/0301006616671273
BibTex Citekey: WallisFEGWB2016
Degree: -

Event

Title: 39th European Conference on Visual Perception (ECVP 2016)
Place of Event: Barcelona, Spain
Start-/End Date: 2016-08-29 - 2016-09-01


Source 1

Title: Perception
Source Genre: Journal
Publ. Info: London : Pion Ltd.
Pages: -
Volume / Issue: 45 (ECVP Abstract Supplement)
Sequence Number: -
Start / End Page: 175 - 176
Identifier: ISSN: 0301-0066
CoNE: https://pure.mpg.de/cone/journals/resource/954925509369