  Trivial or impossible: dichotomous data difficulty masks model differences (on ImageNet and beyond)

Meding, K., Schulze Buschoff, L., Geirhos, R., & Wichmann, F. A. (submitted). Trivial or impossible: dichotomous data difficulty masks model differences (on ImageNet and beyond).



Locators

Locator: https://arxiv.org/pdf/2110.05922 (Table of contents)
Description: -
OA-Status: -

Creators

Creators:
Meding, K., Author
Schulze Buschoff, L.1, Author
Geirhos, R., Author
Wichmann, F. A., Author
Affiliations:
1 External Organizations

Content

Free keywords: -
 Abstract: "The power of a generalization system follows directly from its biases" (Mitchell 1980). Today, CNNs are incredibly powerful generalisation systems -- but to what degree have we understood how their inductive bias influences model decisions? We here attempt to disentangle the various aspects that determine how a model decides. In particular, we ask: what makes one model decide differently from another? In a meticulously controlled setting, we find that (1.) irrespective of the network architecture or objective (e.g. self-supervised, semi-supervised, vision transformers, recurrent models) all models end up with a similar decision boundary. (2.) To understand these findings, we analysed model decisions on the ImageNet validation set from epoch to epoch and image by image. We find that the ImageNet validation set, among others, suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.0% "trivial" and 11.5% "impossible" images (beyond label errors). Only 42.5% of the images could possibly be responsible for the differences between two models' decision boundaries. (3.) Only removing the "impossible" and "trivial" images allows us to see pronounced differences between models. (4.) Humans are highly accurate at predicting which images are "trivial" and "impossible" for CNNs (81.4%). This implies that in future comparisons of brains, machines and behaviour, much may be gained from investigating the decisive role of images and the distribution of their difficulties.
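The dichotomous-data-difficulty split described in the abstract can be illustrated with a minimal sketch. It assumes a boolean correctness matrix correct[m, i] (model m correct on image i) and uses the simplest possible criterion (every model right versus every model wrong) as an illustrative stand-in for the paper's full analysis across architectures, epochs and label errors; the function and variable names below are hypothetical.

```python
import numpy as np

def classify_image_difficulty(correct):
    """Split images into 'trivial', 'impossible', and 'decisive' groups.

    correct: bool array of shape (num_models, num_images), where
    correct[m, i] is True if model m classifies image i correctly.
    Illustrative criterion only (all models right / all models wrong),
    not necessarily the paper's exact definition.
    """
    num_models = correct.shape[0]
    per_image_hits = correct.sum(axis=0)       # how many models got each image right

    trivial = per_image_hits == num_models     # every model is correct
    impossible = per_image_hits == 0           # no model is correct
    decisive = ~(trivial | impossible)         # only these can separate models

    return trivial, impossible, decisive


# Hypothetical usage: 5 models evaluated on 1000 validation images
# with synthetic per-image correctness.
rng = np.random.default_rng(0)
correct = rng.random((5, 1000)) < 0.75
trivial, impossible, decisive = classify_image_difficulty(correct)
print(f"trivial: {trivial.mean():.1%}, "
      f"impossible: {impossible.mean():.1%}, "
      f"decisive: {decisive.mean():.1%}")
```

Under such a split, only the "decisive" images can contribute to differences between two models' decision boundaries, which is why removing the "trivial" and "impossible" images makes model differences visible.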

Details

Language(s):
 Dates: 2021-10
 Publication Status: Submitted
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: -
 Degree: -
