



Journal Article

Predicting point-light actions in real-time


Graf, M., Reitzner, B., Corves, C., Casile, A., Giese, M., & Prinz, W. (2007). Predicting point-light actions in real-time. NeuroImage, 36(Supplement 2), T22-T32. doi:10.1016/j.neuroimage.2007.03.017.

Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-CDBF-2
There is convincing evidence for a mirror system in humans which simulates the actions of conspecifics. One possible purpose of such a simulation system is to support action prediction in real-time. Our goal was to study whether the prediction of actions involves a real-time simulation process. We motion-captured a number of human actions and rendered them as point-light action sequences. Observers viewed brief videos of these actions, followed by an occluder and a static test posture. We independently varied the occluder time and the movement gap (i.e., the time between the endpoint of the action and the test posture). Observers were required to judge whether the test stimulus depicted a continuation of the action in the same depth orientation. Prediction performance was best when occluder time and movement gap corresponded, i.e., when the test posture was a continuation of the sequence that matched the occluder duration (Experiments 1, 2 and 4). This pattern of results was destroyed when the sequences and test images were flipped around the horizontal axis (Experiment 3). Overall, our findings suggest that action prediction involves a simulation process that operates in real-time. This process can break down when the actions are presented under viewing conditions for which observers have little experience.
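The factorial logic of the paradigm can be sketched as follows. This is an illustrative reconstruction, not the authors' stimulus code: the specific durations are assumed values, and only the crossing of occluder time with movement gap and the matched-condition prediction come from the abstract.

```python
# Hypothetical sketch of the occluder-time x movement-gap design.
# Duration values are illustrative assumptions, not those used in the study.
from itertools import product

occluder_times = [100, 400, 700]  # ms the action is hidden (assumed levels)
movement_gaps = [100, 400, 700]   # ms between action endpoint and test posture

conditions = [
    {
        "occluder_ms": occ,
        "gap_ms": gap,
        # A real-time simulation account predicts best performance when
        # the occluder duration matches the movement gap.
        "predicted_best": occ == gap,
    }
    for occ, gap in product(occluder_times, movement_gaps)
]

matched = [c for c in conditions if c["predicted_best"]]
print(len(conditions), len(matched))  # 9 conditions, 3 of them matched
```

Crossing the two factors independently is what lets the design separate a real-time simulation account (performance tied to the occluder/gap match) from accounts based on elapsed time or posture distance alone.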