Journal Article
Perceptual learning with spatial uncertainties

Otto, T., Herzog, M., Fahle, M., & Zhaoping, L. (2006). Perceptual learning with spatial uncertainties. Vision Research, 46(19), 3223-3233.

Cite as: https://hdl.handle.net/21.11116/0000-0002-D419-3
In perceptual learning, stimuli are usually assumed to be presented at a constant retinal location during training. However, due to tremor, drift, and microsaccades of the eyes, the same stimulus covers different retinal positions on sequential trials. Because of these variations, the mathematical decision problem changes from linear to non-linear (Zhaoping, Herzog, & Dayan, 2003). This non-linearity implies three predictions. First, varying the spatial position of a stimulus within a moderate range does not impair perceptual learning. Second, improvement for one stimulus variant can yield negative transfer to other variants. Third, interleaved training with two stimulus variants yields no or strongly diminished learning. Using a bisection task, we found psychophysical evidence for the first and third predictions. However, contrary to the second prediction, no negative transfer was found.