Released

Conference Paper

ImageNet suffers from dichotomous data difficulty

MPG Authors
There are no MPG authors in this publication
External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary Material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Meding, K., Schulze Buschoff, L., Geirhos, R., & Wichmann, F. (2021). ImageNet suffers from dichotomous data difficulty. In NeurIPS 2021 Workshop on ImageNet: past, present, and future (pp. 1-27).


Citation link: https://hdl.handle.net/21.11116/0000-0009-C46C-1
Abstract
"The power of a generalization system follows directly from its biases" (Mitchell 1980). Today, CNNs are incredibly powerful generalisation systems---but to what degree have we understood how their inductive bias influences model decisions? We here attempt to disentangle the various aspects that determine how a model decides. In particular, we ask: what makes one model decide differently from another? In a meticulously controlled setting, we find that (1.) irrespective of the network architecture or objective (e.g. self-supervised, semi-supervised, vision transformers, recurrent models) all models end up with a similar decision boundary. (2.) To understand these findings, we analysed model decisions on the ImageNet validation set from epoch to epoch and image by image. We find that the ImageNet validation set suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.3% "trivial" and 11.3% "impossible" images. Only 42.4% of the images are responsible for the differences between two models' decision boundaries. The impossible images are not driven by label errors. (3.) Finally, humans are highly accurate at predicting which images are "trivial" and "impossible" for CNNs (81.4%). Taken together, it appears that ImageNet suffers from dichotomous data difficulty. This implies that in future comparisons of brains, machines and behaviour, much may be gained from investigating the decisive role of images and the distribution of their difficulties.