Meeting Abstract

Peripheral vision in the central-peripheral dichotomy

Author

Zhaoping, L.
Department of Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Zhaoping, L. (2022). Peripheral vision in the central-peripheral dichotomy. In 45th Annual Meeting of the Japan Neuroscience Society (JNSS 2022) (pp. 406-407).


Cite as: https://hdl.handle.net/21.11116/0000-000A-E34B-2
Abstract
Compared to central vision, peripheral vision not only has a lower spatial sampling resolution in the retina but also, according to the recently proposed central-peripheral dichotomy (CPD; Zhaoping 2017, 2019), serves primarily for looking rather than for seeing. Furthermore, for seeing (i.e., recognizing and discriminating visual objects), the CPD asserts that peripheral vision has a weaker or absent feedback component in the feedforward and feedback processes along the visual pathway from the primary visual cortex (V1) to higher visual areas. Because of an attentional bottleneck assumed to start at V1's output to downstream areas (Zhaoping 2019), visual recognition in higher visual areas relies on impoverished sensory information fed forward from V1. To aid recognition in challenging or ambiguous situations, in which the perceptual outcome of viewing a scene could be one of multiple non-trivial possibilities, central vision uses feedback from higher visual areas to lower ones such as V1 to query for additional information. This query uses the brain's internal model of the visual world to disambiguate between the possibilities and settle on an eventual perceptual outcome. Peripheral vision, with a weaker or absent feedback query according to the CPD, is therefore vulnerable to visual illusions arising from misleading V1 inputs. I will show two visual illusions predicted by the CPD using our knowledge of V1's neural response properties. One is the reversed depth illusion (Zhaoping & Ackermann 2018), in perceiving the three-dimensional depth of a surface from the viewer. The other is the flip tilt illusion (Zhaoping 2020), in perceiving the orientation of an item in an image. Usually, both illusions are visible only peripherally. A relative of the flip tilt illusion is a surprising predicted parallel advantage: in a special visual search task, it is faster to find a target that is parallel rather than perpendicular to uniformly oriented nontargets (Zhaoping 2022).
As in typical visual search tasks, the time needed to complete the task is largely determined by looking, the process of deciding where in the peripheral visual field to direct each saccade, until the target lies at the saccadic destination. Hence, this predicted parallel advantage highlights the role of peripheral vision in looking. Indeed, the parallel advantage is stronger for targets in the more peripheral parts of the visual field.