Meeting Abstract

Learning anticipatory eye-movements for control

MPS-Authors

Chuang,  LL
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Nieuwenhuizen,  FM
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Walter,  J
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Chuang, L., Nieuwenhuizen, F., Walter, J., & Bülthoff, H. (2015). Learning anticipatory eye-movements for control. In C. Bermeitinger, A. Mojzisch, & W. Greve (Eds.), TeaP 2015: Abstracts of the 57th Conference of Experimental Psychologists (p. 58). Lengerich, Germany: Pabst.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002A-474A-0
Abstract
Anticipatory eye-movements (or look-ahead fixations) are often observed in complex closed-loop control tasks, such as steering a vehicle along a non-straight path (Land & Lee, 1994). This eye-movement behavior allows the observer to switch between the different visual cues that are relevant for minimizing present and future control errors (Wilkie, Wann, & Allison, 2008). Here, we asked: Are anticipatory eye-movements generic, or are they acquired according to the learning environment? We trained and tested 27 participants on a control system that simulated the simplified dynamics of a rotorcraft. Participants had to translate laterally along a specified path while maintaining a fixed altitude; ground and vertical landmarks provided the respective visual cues for these two subtasks. Training took place under one of three field-of-view conditions (height × width: 60° × 60°, 60° × 180°, or 125° × 180°), while testing took place in an unrestricted field-of-view environment (125° × 230°). We found that restricting the field of view during training significantly decreased the number of anticipatory eye-movements during testing, an effect that can be largely attributed to the size of the horizontal field of view. Our findings suggest that anticipatory eye-movements for closed-loop control are shaped by the conditions of the training environment.