  Pointwise Representational Similarity

Kolling, C., Speicher, T., Nanda, V., Toneva, M., & Gummadi, K. (2023). Pointwise Representational Similarity. Retrieved from https://arxiv.org/abs/2305.19294.

Basic data

Genre: Research paper

Files

arXiv:2305.19294.pdf (Preprint), 5MB
Name:
arXiv:2305.19294.pdf
Description:
File downloaded from arXiv at 2023-07-10 10:37
OA status:
Not specified
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-


Creators

Creators:
Kolling, Camila 1, Author
Speicher, Till 1, Author
Nanda, Vedant 1, Author
Toneva, Mariya 2, Author
Gummadi, Krishna 1, Author
Affiliations:
1 Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society, ou_2105291
2 Group M. Toneva, Max Planck Institute for Software Systems, Max Planck Society, ou_3444531

Content

Keywords: Computer Science, Learning, cs.LG
Abstract: With the increasing reliance on deep neural networks, it is important to develop ways to better understand their learned representations. Representation similarity measures have emerged as a popular tool for examining learned representations. However, existing measures only provide aggregate estimates of similarity at a global level, i.e., over a set of representations for N input examples. As such, these measures are not well suited for investigating representations at a local level, i.e., representations of a single input example. Local similarity measures are needed, for instance, to understand which individual input representations are affected by training interventions to models (e.g., to be more fair and unbiased) or are at greater risk of being misclassified. In this work, we fill in this gap and propose Pointwise Normalized Kernel Alignment (PNKA), a measure that quantifies how similarly an individual input is represented in two representation spaces. Intuitively, PNKA compares the similarity of an input's neighborhoods across both spaces. Using our measure, we are able to analyze properties of learned representations at a finer granularity than what was previously possible. Concretely, we show how PNKA can be leveraged to develop a deeper understanding of (a) the input examples that are likely to be misclassified, (b) the concepts encoded by (individual) neurons in a layer, and (c) the effects of fairness interventions on learned representations.
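The abstract's core idea, comparing how an input's neighborhood of similarities agrees across two representation spaces, can be illustrated with a short sketch. This is a hedged reconstruction from the abstract alone, not the paper's exact formulation: the function name `pnka_scores`, the mean-centering step, and the use of dot-product similarity matrices are all assumptions made for illustration.

```python
import numpy as np

def pnka_scores(X, Y):
    """Illustrative pointwise similarity in the spirit of PNKA.

    X, Y: (N, d1) and (N, d2) arrays of representations of the same
    N inputs in two different spaces. For each input i, we compare
    the vector of i's similarities to all N inputs in space X with
    the corresponding vector in space Y, via cosine similarity.
    Returns an (N,) array of per-input scores in [-1, 1].
    """
    # Mean-center each space (an assumption; removes offset effects).
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Pairwise "neighborhood" similarity matrices, each N x N.
    K = X @ X.T
    L = Y @ Y.T
    # Cosine similarity between row i of K and row i of L.
    num = (K * L).sum(axis=1)
    denom = np.linalg.norm(K, axis=1) * np.linalg.norm(L, axis=1)
    return num / denom
```

Under this sketch, an input represented consistently in both spaces (same neighbors, similarly ranked) scores near 1, while an input whose neighborhood changes between spaces, e.g., after a fairness intervention, scores lower, which is what makes a pointwise measure useful for flagging individual examples.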

Details

Language(s): eng - English
Date: 2023-05-30
Publication status: Published online
Pages: 33 p.
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: arXiv: 2305.19294
URI: https://arxiv.org/abs/2305.19294
BibTex Citekey: Kolling2305.19294
Degree type: -
