
Released

Poster

A relative encoding approach to modeling Spatiotemporal Boundary Formation

MPS-Authors
Cunningham, DW
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Graf, ABA
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Cunningham, D., Graf, A., & Bülthoff, H. (2002). A relative encoding approach to modeling Spatiotemporal Boundary Formation. Poster presented at the Second Annual Meeting of the Vision Sciences Society (VSS 2002), Sarasota, FL, USA.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-DE69-E
Abstract
When a camouflaged animal sits in front of the appropriate background, the animal is effectively invisible. As soon as the animal moves, however, it is easily visible despite the fact that there is still no static shape information: its shape is perceived solely from the pattern of changes over time. This process, referred to as Spatiotemporal Boundary Formation (SBF), can be initiated by a wide range of texture transformations, including changes in the visibility, shape, or color of individual texture elements. Shipley and colleagues have gathered a wealth of psychophysical data on SBF, and have presented a mathematical proof of how the orientation of local edge segments (LESs) can be recovered from as few as 3 element changes (Shipley and Kellman, 1997). Here, we extend this proof to the extraction of global form and motion. More specifically, we present a model that recovers the orientation of the LESs from a dataset consisting of the relative spatiotemporal locations of the element changes. The recovered orientations of as few as 2 LESs can then be used to extract the global motion, which in turn determines the relative spatiotemporal location and minimal length of the LESs. Computational simulations show that the model captures the major psychophysical aspects of SBF, including a dependency on the spatiotemporal density of element changes, a sensitivity to spurious changes, an ability to extract more than one figure at a time, and a tolerance for non-constant global motion. Unlike Shipley and Kellman's earlier proof, which required that pairs of element changes be represented as local motion vectors, the present model merely encodes the relative spatiotemporal locations of the changes. This relative encoding scheme yields several emergent properties that are strikingly similar to the perception of aperture-viewed figures (anorthoscopic perception), offering the possibility of unifying the two phenomena within a single mathematical model.
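
The linear-algebra core of the two recovery steps described in the abstract can be sketched compactly. The Python fragment below is a minimal illustration under simplifying assumptions, not the poster's actual implementation: it assumes a straight LES translating at constant velocity, so that an element at position x_i changes at time t_i when the edge sweeps over it, giving n · (x_i − x_0) = s (t_i − t_0) for unit normal n and normal speed s. The function names, the least-squares solver, and the synthetic event data are all illustrative choices, not taken from the original work.

import numpy as np

def recover_les(changes):
    """Recover the unit normal and normal speed of one local edge
    segment (LES) from >= 3 element changes, each given as (x, y, t).
    Only RELATIVE spatiotemporal locations are used: with w = n / s,
    each change satisfies w . (x_i - x_0) = t_i - t_0."""
    changes = np.asarray(changes, dtype=float)
    dx = changes[1:, :2] - changes[0, :2]        # positions relative to the first change
    dt = changes[1:, 2] - changes[0, 2]          # times relative to the first change
    w, *_ = np.linalg.lstsq(dx, dt, rcond=None)  # solve dx @ w = dt
    speed = 1.0 / np.linalg.norm(w)              # normal speed of the edge
    return w * speed, speed                      # unit normal, normal speed

def recover_global_motion(les_list):
    """Solve the aperture problem for the global velocity v from >= 2
    LESs with non-parallel normals: n_k . v = s_k for each LES k."""
    normals = np.array([n for n, _ in les_list], dtype=float)
    speeds = np.array([s for _, s in les_list], dtype=float)
    v, *_ = np.linalg.lstsq(normals, speeds, rcond=None)
    return v

# Synthetic check: an edge with unit normal (1, 0) sweeping rightward at
# 2 units/s changes an element exactly when it crosses its x position.
events = [(0.0, 0.3, 0.0), (2.0, -1.0, 1.0), (4.0, 0.7, 2.0)]
normal, speed = recover_les(events)              # -> approx. [1, 0] and 2.0

With more than the minimum of 3 changes (or 2 LESs), the least-squares solve simply overdetermines the same linear system, which is one way such a model could average out noisy or spurious element changes, consistent with the sensitivity-and-tolerance behavior the simulations in the abstract describe.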