Deciding where to look when there's not much to see


Chukoskie, L., Schwartz, O., Sejnowski, T., Dayan, P., & Krauzlis, R. (2005). Deciding where to look when there's not much to see. Poster presented at 35th Annual Meeting of the Society for Neuroscience (Neuroscience 2005), Washington, DC, USA.

Cite as: http://hdl.handle.net/21.11116/0000-0005-AAC1-1
Visually guided saccades bring items of interest onto the fovea and have been the subject of intensive study. However, under uncertain visual conditions (e.g., fog, darkness, or a lack of visual structure), eye movements are guided not only by what is observable in the visual world, but also by prior experience about which locations are likely to provide information or reward. Very little is known about the planning of saccades guided by non-visual representations of where to look next. We therefore designed a task in which human subjects searched for a target on either a mean gray screen or a structured noise background (1/f, pink noise). The target's location did not correspond to any visual element on the screen, but was drawn from a probability distribution with a given center and spread. Subjects were asked to find the target as quickly as possible; an eye movement to the correct location was rewarded with a tone. Eye movements were measured with an ISCAN video-based eye-tracking system. We obtained pilot data from four subjects, two of whom were naive to the purposes of the experiment. After practice, subjects' eye movements revealed that they had learned both the center of the probability distribution from which the targets were drawn and information about its spread. Mean trial duration varied greatly, from 128 ms to 64 s, depending on the properties of the distribution and the probability of a target on each trial. Individual subjects adopted different eye movement scanning strategies, which led to differential learning on the pink-noise versus the gray background. Subjects appeared to use the visual landmarks in the pink-noise background despite the fact that they were uncorrelated with target location. We conclude that humans were able to build estimates of where to look even when visual cues did not provide extra information. In addition, our task offers a method for probing how prior information is integrated with visual information.
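The stimulus design described above — a 1/f (pink) noise background that is uncorrelated with the target, and a target position drawn from a distribution with a given center and spread — can be sketched as follows. This is a minimal illustration, not the authors' code: the image size, the isotropic Gaussian form of the target distribution, and all parameter values are assumptions made for the example.

```python
import numpy as np

def pink_noise_image(size=256, rng=None):
    """Generate a 1/f-amplitude ("pink") noise image, normalized to [0, 1].

    White noise is shaped in the Fourier domain so that spectral
    amplitude falls off as 1/f, giving the naturalistic structure
    described in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal((size, size))
    spectrum = np.fft.fft2(white)
    fx = np.fft.fftfreq(size)
    f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
    f[0, 0] = 1.0  # avoid division by zero at the DC component
    img = np.real(np.fft.ifft2(spectrum / f))
    return (img - img.min()) / (img.max() - img.min())

def sample_target(center=(128.0, 128.0), spread=20.0, rng=None):
    """Draw a target position from an isotropic 2D Gaussian.

    `center` and `spread` play the role of the distribution's center
    and spread in the task; an isotropic Gaussian is an assumption.
    """
    rng = np.random.default_rng() if rng is None else rng
    return rng.normal(loc=center, scale=spread)

rng = np.random.default_rng(0)
background = pink_noise_image(rng=rng)  # uncorrelated with target location
target_xy = sample_target(rng=rng)      # (x, y) in pixel coordinates
```

On each trial the background carries no information about `target_xy`, so any improvement in search efficiency across trials must come from a learned estimate of the distribution's center and spread rather than from visual cues.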