
Released

Journal Article

From photos to sketches – how humans and deep neural networks process objects across different levels of visual abstraction

MPG Authors

Singer, Johannes
Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Department of Psychology, Ludwig Maximilians University Munich, Germany;

Seeliger, Katja
Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

Hebart, Martin N.
Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

External Resources
No external resources have been provided
Full texts (freely accessible)

Singer_2022.pdf
(publisher's version), 2MB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Singer, J., Seeliger, K., Kietzmann, T. C., & Hebart, M. N. (2022). From photos to sketches – how humans and deep neural networks process objects across different levels of visual abstraction. Journal of Vision, 22(2): 4. doi:10.1167/jov.22.2.4.


Citation link: https://hdl.handle.net/21.11116/0000-000B-2953-A
Abstract
Line drawings convey meaning with just a few strokes. Despite strong simplifications, humans can recognize objects depicted in such abstracted images without effort. To what degree do deep convolutional neural networks (CNNs) mirror this human ability to generalize to abstracted object images? While CNNs trained on natural images have been shown to exhibit poor classification performance on drawings, other work has demonstrated highly similar latent representations in the networks for abstracted and natural images. Here, we address these seemingly conflicting findings by analyzing the activation patterns of a CNN trained on natural images across a set of photographs, drawings, and sketches of the same objects and comparing them to human behavior. We find a highly similar representational structure across levels of visual abstraction in early and intermediate layers of the network. This similarity, however, does not translate to later stages in the network, resulting in low classification performance for drawings and sketches. We identified that texture bias in CNNs contributes to the dissimilar representational structure in late layers and the poor performance on drawings. Finally, by fine-tuning late network layers with object drawings, we show that performance can be largely restored, demonstrating the general utility of features learned on natural images in early and intermediate layers for the recognition of drawings. In conclusion, generalization to abstracted images, such as drawings, seems to be an emergent property of CNNs trained on natural images, which is, however, suppressed by domain-related biases that arise during later processing stages in the network.
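
To illustrate the kind of analysis the abstract describes, here is a minimal, hypothetical sketch of comparing representational structure across levels of visual abstraction: it extracts activations from early, intermediate, and late layers of an ImageNet-trained CNN (VGG-16 as a stand-in; the paper's exact architecture and layer choices may differ) for matched photos and sketches, then correlates the resulting representational dissimilarity matrices. The file paths and layer indices are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from scipy.stats import spearmanr

# ImageNet-trained CNN; VGG-16 is an assumption, not necessarily the paper's network.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def layer_activations(image_paths, layer_idx):
    """Flattened activations of one convolutional stage, per image."""
    truncated = torch.nn.Sequential(*list(model.features.children())[:layer_idx + 1])
    feats = []
    with torch.no_grad():
        for p in image_paths:
            x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
            feats.append(truncated(x).flatten().numpy())
    return np.stack(feats)

def rdm(acts):
    """Representational dissimilarity matrix: 1 - Pearson r between object pairs."""
    return 1.0 - np.corrcoef(acts)

# Hypothetical file lists: one photo and one sketch per object, same object order.
photos = ["photos/cat.jpg", "photos/dog.jpg", "photos/car.jpg"]
sketches = ["sketches/cat.png", "sketches/dog.png", "sketches/car.png"]

for name, idx in [("early", 4), ("intermediate", 16), ("late", 28)]:
    rdm_photo = rdm(layer_activations(photos, idx))
    rdm_sketch = rdm(layer_activations(sketches, idx))
    # Compare the two RDMs on their lower triangles (Spearman correlation).
    tri = np.tril_indices_from(rdm_photo, k=-1)
    rho, _ = spearmanr(rdm_photo[tri], rdm_sketch[tri])
    print(f"{name} layers: photo-sketch RDM correlation = {rho:.2f}")
```

On the abstract's account, this correlation would be high for the early and intermediate stages but drop at the late stage.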
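The abstract also reports that classification of drawings can be largely restored by fine-tuning late network layers. A hedged sketch of that idea, again with VGG-16 and a hypothetical drawing dataset (`drawing_loader` and the 42 categories are assumptions): freeze the convolutional features learned on natural images and train only the classifier head on object drawings.

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze everything, then unfreeze only the late (classifier) layers.
for p in model.parameters():
    p.requires_grad = False
for p in model.classifier.parameters():
    p.requires_grad = True

# Replace the output layer to match the drawing categories (42 is hypothetical).
model.classifier[6] = nn.Linear(4096, 42)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_epoch(drawing_loader):
    """One training pass; drawing_loader is assumed to yield (images, labels)."""
    model.train()
    for images, labels in drawing_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Keeping the early and intermediate layers fixed mirrors the abstract's point that features learned on natural images are already useful for recognizing drawings; only the later, domain-biased stages need adjusting.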