
Released

Meeting Abstract

Towards matching the peripheral visual appearance of arbitrary scenes using deep convolutional neural networks

MPS-Authors
/persons/resource/persons83896

Ecker, AS
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84314

Wichmann, FA
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83805

Bethge, M
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Wallis, T., Funke, C., Ecker, A., Gatys, L., Wichmann, F., & Bethge, M. (2016). Towards matching the peripheral visual appearance of arbitrary scenes using deep convolutional neural networks. Perception, 45(ECVP Abstract Supplement), 175-176.


Cite as: https://hdl.handle.net/21.11116/0000-0000-7C7D-A
Abstract
Distortions of image structure can go unnoticed in the visual periphery, and objects can be harder to identify (crowding). Is it possible to create equivalence classes of images that discard and distort image structure but appear the same as the original images? Here we use deep convolutional neural networks (CNNs) to study peripheral representations that are texture-like, in that summary statistics within some pooling region are preserved but local position is lost. Building on our previous work generating textures by matching CNN responses, we first show that while CNN textures are difficult to discriminate from many natural textures, they fail to match the appearance of scenes at a range of eccentricities and sizes. Because texturising scenes discards long-range correlations over too large an area, we next generate images that match CNN features within overlapping pooling regions (see also Freeman and Simoncelli, 2011). These images are more difficult to discriminate from the original scenes, indicating that constraining features by their neighbouring pooling regions provides greater perceptual fidelity. Our ultimate goal is to determine the minimal set of deep CNN features that produce metameric stimuli by varying the feature complexity and pooling regions used to represent the image.
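To make the procedure concrete, the sketch below illustrates the general recipe the abstract describes: synthesising an image by matching Gram-matrix statistics of CNN features, computed within overlapping spatial pooling regions rather than globally. This is an illustrative assumption in the spirit of Gatys-style texture synthesis and Freeman and Simoncelli (2011), not the authors' implementation; the VGG-19 layer choice, the 8-pixel window with 4-pixel stride, and the optimiser settings are all hypothetical.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative sketch only: local (windowed) Gram statistics of VGG-19
# features, matched by direct pixel optimisation. Layers, window size,
# stride, and optimiser settings are assumptions, not the authors' setup.

def local_grams(feat, window=8, stride=4):
    """Gram matrices of layer activations inside overlapping windows.
    feat: (1, C, H, W) -> (num_windows, C, C). Overlap comes from
    choosing stride < window, so neighbouring regions share features."""
    c = feat.shape[1]
    patches = F.unfold(feat, kernel_size=window, stride=stride)      # (1, C*w*w, N)
    patches = patches.view(c, window * window, -1).permute(2, 0, 1)  # (N, C, w*w)
    return patches @ patches.transpose(1, 2) / (window * window)     # (N, C, C)

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = {1, 6, 11, 20}  # relu1_1, relu2_1, relu3_1, relu4_1 (assumed choice)

def features(img):
    """Collect activations at the chosen layers."""
    out, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            out.append(x)
    return out

def synthesise(target, steps=200, lr=0.05):
    """Optimise pixels so local Gram statistics match the target image.
    target: preprocessed (1, 3, H, W) tensor (ImageNet normalisation
    omitted here for brevity)."""
    with torch.no_grad():
        target_stats = [local_grams(f) for f in features(target)]
    img = torch.rand_like(target, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(F.mse_loss(local_grams(f), t)
                   for f, t in zip(features(img), target_stats))
        loss.backward()
        opt.step()
    return img.detach()
```

Setting the window to the full feature map recovers global Gatys-style texture synthesis, which discards long-range correlations; shrinking the window and overlapping the regions constrains features jointly with their neighbours, the manipulation the abstract reports as yielding greater perceptual fidelity.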