
Record


Released

Poster

Measuring the saliency of an invisible visual feature and its interaction with visible features

MPG Authors
/persons/resource/persons245559

Zou, J
Department of Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons226321

Zhaoping, L
Department of Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Zou, J., & Zhaoping, L. (2021). Measuring the saliency of an invisible visual feature and its interaction with visible features. Poster presented at Twenty-First Annual Meeting of the Vision Sciences Society (V-VSS 2021). doi:10.1167/jov.21.9.2930.


Citation link: https://hdl.handle.net/21.11116/0000-0008-A0DD-A
Abstract
A single object presented to one eye among many otherwise identical objects presented to the other eye (an ocularity singleton) is salient and attracts visual attention automatically. Saliency from ocularity contrast helps to rapidly localize the foreground, especially in 3D visual scenes. However, unlike saliency in other feature dimensions, e.g., color (C) and orientation (O), uniqueness in ocularity (E, eye of origin) alone is perceptually invisible, making it difficult to quantify; in particular, the reaction time to detect an ocularity singleton, RT(E), remains unknown. Quantitative measures could help further investigate the interaction between saliency by ocularity and saliency by other features, and thereby help uncover its neural mechanisms. In the current study, RTs were measured in a search task for a unique target bar among many background bars that were identical to one another in their C, O, and E features. The target bar was unique in C or O alone, or unique simultaneously in two or three feature dimensions: CO, CE, EO, or CEO. Importantly, using a quantitative model derived from the V1 Saliency Hypothesis (V1SH), which links saliency to the neural activities of primate V1, RT(E) was then robustly calculated from RT(C), RT(O), RT(CO), RT(CE), RT(EO), and RT(CEO). Furthermore, according to V1SH, whether RT(CE) is shorter than the RT of the winner of a race between the processes underlying RT(E) and RT(C) reflects whether there are V1 neurons tuned conjunctively to both E and C (monocular neurons tuned to color) that contribute to saliency. Analogously, RT(EO) sheds light on monocular neurons tuned to orientation. We show that RT(CE) and RT(EO) are shorter than the RT of the race winner between the corresponding single-feature RTs, suggesting a contribution from CE-tuned and EO-tuned neurons. However, this holds only for search among red, but not green, background bars, suggesting an intrinsic color asymmetry in the interaction between ocularity and saliency.
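
To illustrate the race-model comparison described in the abstract, the following Python sketch runs a minimal Monte Carlo race model. It is not the authors' analysis: the distributions, sample sizes, and numbers are hypothetical placeholders, and in the actual study RT(E) is not measured directly but inferred from the V1SH-based model. The sketch only shows the generic logic of testing whether a double-feature RT, e.g. RT(CE), beats the winner of a race between the two single-feature detection processes; the same comparison applies to RT(EO) versus a race between RT(E) and RT(O).

import numpy as np

rng = np.random.default_rng(0)

def race_winner_rts(rt_a, rt_b, n_sim=100_000):
    """Monte Carlo race model: on each simulated trial, independently resample
    one RT from each single-feature condition and let the faster one win."""
    a = rng.choice(rt_a, size=n_sim, replace=True)
    b = rng.choice(rt_b, size=n_sim, replace=True)
    return np.minimum(a, b)

# Hypothetical RT samples in seconds (placeholders, not measured data).
rt_c  = rng.normal(0.55, 0.08, size=200)   # color-singleton search, RT(C)
rt_e  = rng.normal(0.70, 0.10, size=200)   # ocularity singleton, RT(E) (inferred in the study)
rt_ce = rng.normal(0.48, 0.07, size=200)   # color + ocularity double feature, RT(CE)

race = race_winner_rts(rt_c, rt_e)
print(f"mean RT(CE)      = {rt_ce.mean():.3f} s")
print(f"mean race winner = {race.mean():.3f} s")

# If RT(CE) is reliably shorter than the race winner, the race model is violated,
# consistent with V1 neurons tuned conjunctively to eye of origin and color (CE neurons).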