  Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes

Pham, N., Schiele, B., Kortylewski, A., & Fischer, J. (2025). Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes. Retrieved from https://arxiv.org/abs/2503.13429.

Basic data

Genre: Research paper

Files

arXiv:2503.13429.pdf (Preprint), 10MB
Name: arXiv:2503.13429.pdf
Description: File downloaded from arXiv at 2025-03-24 09:16
OA status: Not specified
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata:
Copyright date: -
Copyright info: -


Creators

Pham, Nhi¹, Author
Schiele, Bernt¹, Author
Kortylewski, Adam², Author
Fischer, Jonas¹, Author
Affiliations:
¹ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
² Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society, ou_3311330

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: With the rise of neural networks, especially in high-stakes applications, these networks need two properties to ensure their safety: (i) robustness and (ii) interpretability. Recent advances in classifiers with 3D volumetric object representations have demonstrated greatly enhanced robustness on out-of-distribution data. However, these 3D-aware classifiers have not been studied from the perspective of interpretability. We introduce CAVE (Concept Aware Volumes for Explanations), a new direction that unifies interpretability and robustness in image classification. We design an inherently interpretable and robust classifier by extending existing 3D-aware classifiers with concepts extracted from their volumetric representations for classification. Across an array of quantitative interpretability metrics, we compare against different concept-based approaches from the explainable-AI literature and show that CAVE discovers well-grounded concepts that are used consistently across images, while achieving superior robustness.
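The general idea behind concept-based classification described in the abstract can be illustrated with a minimal sketch. This is not the authors' CAVE implementation; it assumes a generic setup in which each class owns a small bank of learned concept prototype vectors, and an image is classified by how strongly its feature vector activates those prototypes. All names and shapes here are hypothetical.

```python
# Illustrative sketch only (not the CAVE code): concept-based classification
# by matching a pooled image feature vector against per-class concept
# prototypes via cosine similarity. Shapes and names are assumptions.
import numpy as np

def classify_by_concepts(features, concept_banks):
    """features: (D,) pooled image feature vector.
    concept_banks: dict mapping class name -> (K, D) array of K prototypes.
    Returns (predicted class, per-class concept activations)."""
    f = features / (np.linalg.norm(features) + 1e-8)
    scores, activations = {}, {}
    for cls, bank in concept_banks.items():
        # Normalize each prototype so the dot product is a cosine similarity.
        bank_n = bank / (np.linalg.norm(bank, axis=1, keepdims=True) + 1e-8)
        act = bank_n @ f                 # one activation per concept
        activations[cls] = act
        scores[cls] = act.max()          # best-matching concept decides
    pred = max(scores, key=scores.get)
    return pred, activations
```

Because the decision is traced back to named concept activations, the same mechanism that classifies also explains: one can report which concept fired for the prediction.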

Details

Language(s): eng - English
Date: 2025-03-17
Publication status: Published online
Pages: 19 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 2503.13429
URI: https://arxiv.org/abs/2503.13429
BibTeX citekey: Pham2503.13429
Type of degree: -
