  From photos to sketches-how humans and deep neural networks process objects across different levels of visual abstraction

Singer, J., Seeliger, K., Kietzmann, T. C., & Hebart, M. N. (2022). From photos to sketches-how humans and deep neural networks process objects across different levels of visual abstraction. Journal of Vision, 22(2): 4. doi:10.1167/jov.22.2.4.

Files

Singer_2022.pdf (Publisher version), 2MB
OA-Status: Gold
Visibility: Public
MIME-Type: application/pdf
Checksum: [MD5]
Copyright Date: -
Copyright Info: -


Creators

Singer, Johannes (1, 2), Author
Seeliger, Katja (1), Author
Kietzmann, Tim C. (3), Author
Hebart, Martin N. (1), Author

Affiliations:
1. Max Planck Research Group Vision and Computational Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society
2. Department of Psychology, Ludwig Maximilians University Munich, Germany
3. Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands

Content

Free keywords: -
 Abstract: Line drawings convey meaning with just a few strokes. Despite strong simplifications, humans can recognize objects depicted in such abstracted images without effort. To what degree do deep convolutional neural networks (CNNs) mirror this human ability to generalize to abstracted object images? While CNNs trained on natural images have been shown to exhibit poor classification performance on drawings, other work has demonstrated highly similar latent representations in the networks for abstracted and natural images. Here, we address these seemingly conflicting findings by analyzing the activation patterns of a CNN trained on natural images across a set of photographs, drawings, and sketches of the same objects and comparing them to human behavior. We find a highly similar representational structure across levels of visual abstraction in early and intermediate layers of the network. This similarity, however, does not translate to later stages in the network, resulting in low classification performance for drawings and sketches. We identified that texture bias in CNNs contributes to the dissimilar representational structure in late layers and the poor performance on drawings. Finally, by fine-tuning late network layers with object drawings, we show that performance can be largely restored, demonstrating the general utility of features learned on natural images in early and intermediate layers for the recognition of drawings. In conclusion, generalization to abstracted images, such as drawings, seems to be an emergent property of CNNs trained on natural images, which is, however, suppressed by domain-related biases that arise during later processing stages in the network.
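The abstract's comparison of "representational structure" across photographs, drawings, and sketches refers to comparing activation patterns between depiction types, commonly done via representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) per depiction type from a layer's activation vectors, then correlate the two RDMs. The sketch below illustrates that general idea only; the toy activation vectors and function names are illustrative assumptions, not taken from the paper.

```python
def pearson(x, y):
    # Pearson correlation between two equal-length vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm(activations):
    # Representational dissimilarity matrix: 1 - correlation
    # between the activation vectors of each pair of objects.
    n = len(activations)
    return [[1 - pearson(activations[i], activations[j])
             for j in range(n)] for i in range(n)]

def upper_triangle(m):
    # Off-diagonal entries above the diagonal, as a flat list.
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

def rdm_similarity(acts_a, acts_b):
    # Second-order similarity: correlate the upper triangles
    # of the two RDMs (higher = more similar structure).
    return pearson(upper_triangle(rdm(acts_a)),
                   upper_triangle(rdm(acts_b)))

# Toy layer activations (one vector per object), photos vs. sketches.
photos   = [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2], [0.1, 1.0, 0.8]]
sketches = [[0.8, 0.1, 0.2], [0.7, 0.2, 0.3], [0.2, 0.9, 0.9]]
print(round(rdm_similarity(photos, sketches), 3))
```

In this toy example the two depiction types produce nearly identical pairwise structure, so the RDM correlation is high even though the raw activations differ — which is the kind of dissociation the study probes layer by layer.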

Details

Language(s): eng - English
Dates: 2022-02-01
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1167/jov.22.2.4
PMID: 35129578
PMC: PMC8822363
Degree: -


Project information

Project name : -
Grant ID : -
Funding program : -
Funding organization : Max Planck Society

Source 1

Title: Journal of Vision
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: Charlottesville, VA : Scholar One, Inc.
Pages: -
Volume / Issue: 22 (2)
Sequence Number: 4
Start / End Page: -
Identifier: ISSN: 1534-7362
CoNE: https://pure.mpg.de/cone/journals/resource/111061245811050