
Released

Conference Paper

Learning explanations that are hard to vary

MPS-Authors

Gresele, L
Department High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Parascandolo, G., Neitz, A., Orvieto, A., Gresele, L., & Schölkopf, B. (2021). Learning explanations that are hard to vary. In Ninth International Conference on Learning Representations (ICLR 2021).


Cite as: https://hdl.handle.net/21.11116/0000-0006-F4F2-5
Abstract
In this paper, we investigate the principle that good explanations are hard to vary in the context of deep learning.
We show that averaging gradients across examples -- akin to a logical OR of patterns -- can favor memorization and 'patchwork' solutions that sew together different strategies, instead of identifying invariances.
To inspect this, we first formalize a notion of consistency for minima of the loss surface, which measures to what extent a minimum appears only when examples are pooled.
We then propose and experimentally validate a simple alternative algorithm based on a logical AND that focuses on invariances and prevents memorization in a set of real-world tasks.
Finally, using a synthetic dataset with a clear distinction between invariant and spurious mechanisms, we dissect learning signals and compare this approach to well-established regularizers.
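The contrast the abstract draws between averaging gradients (a logical OR) and a logical AND can be made concrete with a small sketch: before averaging, mask out gradient components whose signs disagree across examples or environments. The snippet below is a minimal illustration of that sign-agreement idea under assumed conventions, not a verbatim reimplementation of the paper's algorithm; the function name and the agreement threshold tau are chosen here for illustration.

    import torch

    def and_mask_gradients(env_grads, tau=1.0):
        # env_grads: list of gradients for one parameter tensor,
        # one per environment (or per example group), all same shape.
        grads = torch.stack(env_grads)               # (n_envs, *param_shape)
        avg = grads.mean(dim=0)                      # plain averaging ("logical OR")
        agreement = grads.sign().mean(dim=0).abs()   # 1.0 means all signs agree
        mask = (agreement >= tau).to(avg.dtype)      # keep only agreed components
        return mask * avg                            # "logical AND" of the patterns

    g_env_a = torch.tensor([0.5, -1.0, 0.2])
    g_env_b = torch.tensor([0.3,  0.8, 0.1])
    print(and_mask_gradients([g_env_a, g_env_b]))
    # tensor([0.4000, 0.0000, 0.1500]): the middle component, whose sign
    # flips between environments, is zeroed instead of averaged away.

With tau = 1.0 only unanimously agreeing components survive, so a spurious pattern that helps on some examples but hurts on others contributes no update, which is the sense in which the masked update behaves like a logical AND rather than an OR.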