
Released

Poster

Visual illusions and feedforward and feedback processes in visual recognition

MPS-Authors

Zhaoping,  L
Department of Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Zhaoping, L. (2020). Visual illusions and feedforward and feedback processes in visual recognition. Poster presented at Bernstein Conference 2020. doi:10.12751/nncn.bc2020.0293.


Cite as: https://hdl.handle.net/21.11116/0000-0007-0BDF-3
Abstract
Feedback neural connections from higher to lower visual cortical areas are abundant. Although they are manifested in visual recognition behavior (e.g., Tang et al., 2018) and even exploited in artificial convolutional neural networks (Cao et al., 2015), which otherwise rely predominantly on feedforward processing, the recurrent nature of the processing makes it difficult to understand how feedback works together with feedforward visual processing. We investigate this topic by combining computational and visual psychophysical means, motivated by Zhaoping's proposal to study feedback in the context of the attentional bottleneck and by her hypothesized central-peripheral dichotomy (Zhaoping, 2019), according to which the top-down feedback that aids visual recognition, via analysis-by-synthesis computation, is stronger in the central than in the peripheral visual field; we also take advantage of our better knowledge of the feedforward signals from the primary visual cortex (V1). Computationally, we illustrate how the same activities of V1 neurons tuned to, e.g., orientation, motion direction, or stereo depth could arise from very different kinds of visual inputs (as observed physiologically, e.g., Cumming and Parker, 1997; Kuriki et al., 2008). These different inputs that evoke the same V1 responses could therefore cause visual perceptual confounds or visual illusions. Some visual illusions in the literature (e.g., reversed phi motion) can indeed be understood accordingly, as has also been noted by previous researchers.
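The reversed-phi effect mentioned above can be sketched, outside the poster itself, with a minimal opponent (Reichardt-style) correlator; the 1-D random texture, array size, and function name below are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.standard_normal(64)  # zero-mean 1-D random texture (frame 1)

def motion_signal(frame1, frame2, shift=1):
    """Opponent correlator: > 0 signals rightward motion, < 0 leftward."""
    right = np.dot(frame1, np.roll(frame2, -shift))  # evidence for rightward displacement
    left = np.dot(frame1, np.roll(frame2, shift))    # evidence for leftward displacement
    return right - left

phi = np.roll(pattern, 1)  # frame 2: texture physically displaced rightward
reversed_phi = -phi        # same displacement, but contrast reversed

print(motion_signal(pattern, phi) > 0)           # True: rightward percept
print(motion_signal(pattern, reversed_phi) < 0)  # True: illusory leftward percept
```

Reversing the contrast of the second frame flips the sign of the correlation, so the same physical displacement drives the opponent signal in the opposite direction: the reversed-phi illusion.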
Since top-down feedback can combine internal knowledge about the visual world with feedforward inputs to disambiguate such visual confounds, we ask whether the central-peripheral dichotomy can explain when V1 activities do or do not give rise to visual percepts or illusions, and how the strengths of these percepts depend on whether the visual inputs are viewed in the central or the peripheral visual field, and on whether the inputs are presented long enough, or too briefly, for the feedback to be effective. To this end, we explore what new illusions are predicted by the central-peripheral dichotomy and by our knowledge of V1. We will present visual psychophysical tests of some of these predictions, and relate our findings to previous work such as that on feature finding, visual masking, and adversarial attacks on artificial neural networks.