Eye Movements in Shape Categorization


Tanner,  TG
Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society


Tanner, T. (2006). Eye Movements in Shape Categorization. Poster presented at 9th Tübingen Perception Conference (TWK 2006), Tübingen, Germany.

Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D299-F
The objects that we observe in daily life are not generated by random processes. In general, objects inherit their appearance from some coherent generative procedure (e.g., they result
from biological growth, or they are manufactured for a particular purpose). Consequently,
different objects are physically and functionally related to one another. Humans use concepts
to represent the relationship between different objects, in order to recognize and categorize
them, and to make categorical inferences (i.e. predict properties that are not directly observable
but which can be inferred from experience with objects of a certain class).
Depending on the subordinate categorization task [1] and the context, some features of objects
are more diagnostic than others. Most models of categorization tacitly assume that all relevant
features of an object are represented before a category decision is made, and include
attentional weights for the different dimensions. We hypothesize that humans selectively sample
the observable features in the order of subjective informativeness (esp. cue diagnosticity
and availability [2]) in order to make a fast and accurate decision. This would imply that objects
don’t need to be represented completely and that categorization happens during perception. By
tracking the sampling process we could learn more about the informativeness of features in a
given context and task.
The task was to learn to categorize novel stimuli into classes forming partially overlapping
clusters in a common feature space. Stimuli were generated from (probabilistic) generative
models of piecewise NURBS curves forming the closed contours of novel 2D shapes. The
curvatures at certain sufficiently distant control points were used as the features in which the
classes differ. The other control points were kept the same for all stimuli. The classes specified
multi-dimensional Gaussian distributions in this feature space. The overlap of the distributions
along each dimension therefore determined its diagnosticity. On each trial, feedback was given
on whether the correct class had been selected. As the task by definition cannot be solved perfectly, the
experiment was terminated when the learning curve showed no further significant improvement.
Subjects were instructed to make their decisions as quickly and as accurately as possible. Eye movements
as a form of overt attention were recorded to track the feature sampling process.
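The class structure described above can be sketched as follows. This is a minimal illustration only: the two-class setup, the class means, and the shared standard deviation are made-up values, not the study's actual stimulus parameters, and "features" here stand in for the curvatures at the varied control points.

```python
import random

# Hypothetical example: two shape classes as Gaussians in a 3-dimensional
# curvature feature space. All numbers are illustrative assumptions.
CLASS_MEANS = {"A": (0.20, 0.50, 0.80), "B": (0.60, 0.55, 0.82)}
CLASS_SDS = (0.15, 0.15, 0.15)  # shared SD per feature dimension

def sample_stimulus(label, rng=random):
    """Draw one feature vector (curvature values) for a stimulus of a class."""
    return tuple(rng.gauss(m, s) for m, s in zip(CLASS_MEANS[label], CLASS_SDS))

def diagnosticity(dim):
    """d' along one dimension: larger mean separation relative to the SD
    means less class overlap, hence a more diagnostic feature."""
    return abs(CLASS_MEANS["A"][dim] - CLASS_MEANS["B"][dim]) / CLASS_SDS[dim]

d_primes = [diagnosticity(i) for i in range(3)]
# Dimension 0 separates the classes well; dimensions 1 and 2 barely do.
```

With these toy values, only the first feature is worth inspecting closely, which is exactly the kind of asymmetry the eye-tracking data are meant to reveal.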
We discuss the results of ongoing experiments, in particular under which circumstances and how
well humans can learn to solve this task, and how the target locations and sequence of eye
movements are related to their performance. The results are compared to a model of an ideal
Bayesian learner that samples the features in the order that maximizes information gain given its
current knowledge about the task.
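The ideal-observer idea can be sketched in a simplified two-class, two-feature setting: before each fixation, compute for every feature dimension the expected reduction in the entropy of the class posterior, and sample the dimension with the largest expected gain. The Gaussian parameters and the Monte Carlo estimator below are illustrative assumptions, not the model actually fitted in the study.

```python
import math
import random

# Illustrative parameters: two classes, two feature dimensions, shared SD.
MEANS = {"A": (0.2, 0.5), "B": (0.6, 0.55)}
SD = 0.15
LABELS = ("A", "B")

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_info_gain(dim, prior=(0.5, 0.5), n=20000, seed=0):
    """Monte Carlo estimate of H(prior) - E_x[H(posterior | x observed on dim)]."""
    rng = random.Random(seed)
    h_prior = entropy(prior)
    total = 0.0
    for _ in range(n):
        # Sample a class from the prior, then an observation on this dimension.
        k = 0 if rng.random() < prior[0] else 1
        x = rng.gauss(MEANS[LABELS[k]][dim], SD)
        like = [gauss_pdf(x, MEANS[c][dim], SD) for c in LABELS]
        z = prior[0] * like[0] + prior[1] * like[1]
        total += entropy((prior[0] * like[0] / z, prior[1] * like[1] / z))
    return h_prior - total / n

def best_dimension(prior=(0.5, 0.5)):
    """The feature an ideal sampler would fixate next."""
    return max(range(2), key=lambda d: expected_info_gain(d, prior))
```

Here dimension 0, whose class means are far apart, yields the larger expected gain, so the ideal sampler would direct its first fixation to the corresponding shape feature; after each observation the posterior replaces the prior and the ranking is recomputed.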