
Released

Journal Article

Serial reproduction reveals the geometry of visuospatial representations

MPS-Authors

Langlois, Thomas A.
Research Group Computational Auditory Perception, Max Planck Institute for Empirical Aesthetics, Max Planck Society;
Department of Psychology, University of California;
Department of Computer Science, Princeton University;


Jacoby, Nori
Research Group Computational Auditory Perception, Max Planck Institute for Empirical Aesthetics, Max Planck Society;
The Center for Science and Society, Columbia University;

Fulltext (public)

21-cap-lan-01-serial.pdf
(Publisher version), 4MB

Citation

Langlois, T. A., Jacoby, N., Suchow, J. W., & Griffiths, T. L. (2021). Serial reproduction reveals the geometry of visuospatial representations. Proceedings of the National Academy of Sciences of the United States of America, 118(13): e2012938118. doi:10.1073/pnas.2012938118.


Cite as: https://hdl.handle.net/21.11116/0000-0008-9240-A
Abstract
An essential function of the human visual system is to locate objects in space and navigate the environment. Due to limited resources, the visual system achieves this by combining imperfect sensory information with a belief state about locations in a scene, resulting in systematic distortions and biases. These biases can be captured by a Bayesian model in which internal beliefs are expressed in a prior probability distribution over locations in a scene. We introduce a paradigm that enables us to measure these priors by iterating a simple memory task where the response of one participant becomes the stimulus for the next. This approach reveals an unprecedented richness and level of detail in these priors, suggesting a different way to think about biases in spatial memory. A prior distribution on locations in a visual scene can reflect the selective allocation of coding resources to different visual regions during encoding (“efficient encoding”). This selective allocation predicts that locations in the scene will be encoded with variable precision, in contrast to previous work that has assumed fixed encoding precision regardless of location. We demonstrate that perceptual biases covary with variations in discrimination accuracy, a finding that is aligned with simulations of our efficient encoding model but not the traditional fixed encoding view. This work demonstrates the promise of using nonparametric data-driven approaches that combine crowdsourcing with the careful curation of information transmission within social networks to reveal the hidden structure of shared visual representations.
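
The core logic of the serial reproduction paradigm can be illustrated in a few lines of code. The sketch below is not the authors' implementation; it is a minimal one-dimensional illustration, assuming an invented two-bump prior and fixed Gaussian encoding noise (the "fixed encoding" baseline rather than the paper's efficient-encoding account). It shows why iterating the memory task reveals the prior: when each simulated participant reconstructs the stimulus by sampling from the Bayesian posterior, the chain of responses forms a Markov chain whose stationary distribution is the prior itself, so the end states of many independent chains trace out the prior.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 1-D "scene": a grid of candidate locations in [0, 1].
    grid = np.linspace(0.0, 1.0, 512)

    def gauss(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2)

    # Assumed prior over locations: a two-bump mixture (a stand-in for the
    # structured spatial priors the paper measures; not the authors' data).
    prior = 0.6 * gauss(grid, 0.25, 0.05) + 0.4 * gauss(grid, 0.7, 0.08)
    prior /= prior.sum()

    sigma = 0.15  # assumed sensory (encoding) noise, fixed across locations

    def one_generation(stimulus):
        """One participant: noisy percept -> Bayesian reconstruction."""
        percept = stimulus + rng.normal(0.0, sigma)   # noisy encoding
        likelihood = gauss(grid, percept, sigma)      # p(percept | location)
        posterior = likelihood * prior
        posterior /= posterior.sum()
        # Sample the reconstruction from the posterior; this response
        # becomes the stimulus shown to the next participant in the chain.
        return rng.choice(grid, p=posterior)

    # Run many independent chains from uniform seeds. Because each step
    # samples from the posterior, the chain's stationary distribution is
    # the prior, so the final states approximate samples from it.
    n_chains, n_generations = 2000, 30
    states = rng.uniform(0.0, 1.0, size=n_chains)
    for _ in range(n_generations):
        states = np.array([one_generation(s) for s in states])

    hist, _ = np.histogram(states, bins=32, range=(0.0, 1.0), density=True)
    print(np.round(hist, 2))  # two bumps should emerge near 0.25 and 0.7

Replacing the fixed sigma with a precision that varies across locations (higher where the prior is dense) would correspond to the efficient-encoding account the abstract contrasts with this fixed-precision baseline.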