Item Details


Released

Poster

Comparing Search Strategies of Humans and Machines in Clutter

MPS-Authors

Bethge,  M
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Michaelis, C., Weller, M., Funke, C., Ecker, A., Wallis, T., & Bethge, M. (2019). Comparing Search Strategies of Humans and Machines in Clutter. Poster presented at Nineteenth Annual Meeting of the Vision Sciences Society (VSS 2019), St. Pete Beach, FL, USA.


Cite as: https://hdl.handle.net/21.11116/0000-0004-BF54-7
Abstract
While many perceptual tasks become more difficult in the presence of clutter, the human visual system has evolved a general tolerance to cluttered environments. In contrast, current machine learning approaches struggle in the presence of clutter. We compare human observers and CNNs on two target localization tasks with cluttered images created from characters or rendered objects. Each task sample consists of such a cluttered image as well as a separate image of one object that has to be localized. Human observers are asked to report whether the object lies in the left or right half of the image, and accuracy, reaction time and eye movements are recorded. CNNs are trained to segment the object, and the center of mass of the segmentation mask is then used to predict the object's position. Clutter levels are defined by the set size, ranging from 2 to 256 objects per image. We find that for humans processing times increase with the amount of clutter, while for machine learning models accuracy drops. This points to a critical difference between human and machine processing: humans search serially, whereas current machine learning models typically process a whole image in one pass. Following this line of thought, we show that machine learning models with two iterations of processing perform significantly better than the purely feed-forward CNNs that dominate current object recognition applications. This finding suggests that, when confronted with challenging scenes, iterative processing might be just as important for machines as it is for humans.
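The CNN readout described in the abstract (reducing a segmentation mask to a center of mass, then comparing it against the image midline) can be sketched as follows. This is a minimal illustration, not the authors' actual code; the function name and the toy mask are assumptions for demonstration:

```python
import numpy as np

def predict_side(mask: np.ndarray) -> str:
    """Predict whether the target lies in the left or right image half
    from a binary segmentation mask (H x W), via its center of mass."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("empty mask: no target pixels were segmented")
    cx = xs.mean()  # x-coordinate of the mask's center of mass
    return "left" if cx < mask.shape[1] / 2 else "right"

# Toy example: a small blob in the right half of a 10 x 10 image
mask = np.zeros((10, 10), dtype=bool)
mask[4:6, 7:9] = True
print(predict_side(mask))  # → right
```

This readout only scores the left/right decision that human observers also report, so model and human accuracy can be compared on the same binary task regardless of how precise the underlying segmentation is.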