
Released

Talk

From V1SH to CPD: feedforward, feedback, and the attentional bottleneck in vision

MPS-Authors

Zhaoping, L.
Department of Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Zhaoping, L. (2021). From V1SH to CPD: feedforward, feedback, and the attentional bottleneck in vision. Talk presented at Frédéric Joliot Institute for Life Sciences. Gif-Sur-Yvette, France. 2021-06-28.


Cite as: https://hdl.handle.net/21.11116/0000-000C-04A1-9
Abstract
V1SH is the V1 Saliency Hypothesis, and CPD is the Central-Peripheral Dichotomy. I will explain how they motivate a new framework: visual attention selects only a tiny fraction of the visual input for further processing. Selection starts in the primary visual cortex (V1), which creates a bottom-up saliency map (V1SH) to guide the fovea to selected visual locations via gaze shifts.
The resulting framework views vision as consisting of encoding, selection, and decoding stages, placing selection at center stage. It implies a massive loss of non-selected information along the visual pathway downstream of V1. Hence, feedback from downstream visual cortical areas to V1 for better decoding (recognition), through analysis-by-synthesis, should query for additional information and be directed mainly at the foveal region (CPD). Accordingly, non-foveal vision is not only poorer in spatial resolution but also more susceptible to many illusions. I will show some illusions arising from V1's feedforward inputs limited by the attentional bottleneck, and use random-dot stereograms to illustrate how top-down feedback constructively utilizes feedforward inputs in some visual inferences and vetoes them in others, depending on the nature of the feedforward inputs.
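The bottom-up saliency map sketched in the abstract can be illustrated with a toy numerical example. This is not code from the talk: it assumes, as V1SH is commonly modeled, that the saliency of a location is the maximum response over feature-tuned V1 units at that location, so a feature singleton (pop-out) attracts gaze. The grid size, channel count, and response values are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical responses of 4 feature channels (e.g. orientations)
# over an 8x8 grid of visual locations.
responses = rng.random((4, 8, 8))

# A unique feature at one location drives its channel strongly there,
# producing a pop-out (high local saliency).
responses[2, 5, 3] = 3.0

# V1SH-style max rule: saliency at each location is the maximum
# response across all feature channels at that location.
saliency = responses.max(axis=0)

# Gaze/attention is attracted to the most salient location.
winner = np.unravel_index(saliency.argmax(), saliency.shape)
print(winner)  # -> (5, 3), the pop-out location
```

Under this max rule, a location wins the saliency competition as soon as any single feature channel responds strongly there, which is why a lone oddball among uniform distractors is found quickly regardless of the other channels' activity.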