Item Details

Released

Poster

Moving objects in ultra-rapid visual categorisation result in better accuracy, but slower reaction times than static presentations

MPS-Authors
/persons/resource/persons84291

Vuong,  QC
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84258

Thornton,  IM
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Kirchner, H., Vuong, Q., Thorpe, S., & Thornton, I. (2005). Moving objects in ultra-rapid visual categorisation result in better accuracy, but slower reaction times than static presentations. Poster presented at 8th Tübinger Wahrnehmungskonferenz (TWK 2005), Tübingen, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D659-E
Abstract
Ultra-rapid categorisation studies have analysed human responses to briefly flashed, static natural scenes in order to determine the time needed to process different kinds of visual objects. Recently, Kirchner and Thorpe reported that reaction times can be extremely fast if subjects are asked to move their eyes to the side where an animal had appeared. Accuracy was remarkably good, with the fastest reliable saccades occurring only 130 ms after stimulus onset. Using a 2AFC task with apparent-motion displays and manual responses, Vuong and colleagues further showed that humans can be detected more easily than machines. In the present study we combined the two approaches in order to determine the processing speed of static vs. dynamic displays. In blocked conditions, human subjects were asked to detect either an animal or a machine, which was presented static in half of the trials and in apparent motion in the other half. On each trial, an animal and a machine were presented simultaneously to the left and right of fixation, and subjects were asked to make a saccade to, or press a button on, the target side. Manual responses and saccadic eye movements both resulted in good accuracy, and reaction times to animals were significantly faster than to machines. Only saccadic eye movements showed a clear advantage of dynamic over static trials in accuracy, but the analysis of mean reaction times pointed to a speed-accuracy trade-off. This might be explained by different response modes, as seen in the latency distributions. We conclude that form processing can be improved by stimulus motion, but the speed of this process can be observed much more directly in eye-movement latencies than in manual responses.