  Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes

Pham, N., Schiele, B., Kortylewski, A., & Fischer, J. (2025). Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes. Retrieved from https://arxiv.org/abs/2503.13429.

Files

arXiv:2503.13429.pdf (Preprint), 10MB
Name:
arXiv:2503.13429.pdf
Description:
File downloaded from arXiv at 2025-03-24 09:16
OA-Status:
Not specified
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Pham, Nhi (1), Author
Schiele, Bernt (1), Author
Kortylewski, Adam (2), Author
Fischer, Jonas (1), Author
Affiliations:
(1) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
(2) Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society, ou_3311330

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: With the rise of neural networks, especially in high-stakes applications, these networks need two properties to ensure their safety: (i) robustness and (ii) interpretability. Recent advances in classifiers with 3D volumetric object representations have demonstrated greatly enhanced robustness on out-of-distribution data. However, these 3D-aware classifiers have not been studied from the perspective of interpretability. We introduce CAVE (Concept Aware Volumes for Explanations), a new direction that unifies interpretability and robustness in image classification. We design an inherently interpretable and robust classifier by extending existing 3D-aware classifiers with concepts extracted from their volumetric representations for classification. Across an array of quantitative interpretability metrics, we compare against concept-based approaches from the explainable-AI literature and show that CAVE discovers well-grounded concepts that are used consistently across images while achieving superior robustness.
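The abstract describes classifying images via concepts extracted from volumetric representations. The following is a minimal, hypothetical sketch of such a concept-bottleneck-style head (it is not the authors' code): image features are scored against an assumed bank of concept vectors, and a linear head maps concept activations to class scores. All names, dimensions, and weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a bank of unit-norm concept vectors (in practice these
# would be extracted from the 3D object volumes, per the abstract).
n_concepts, feat_dim, n_classes = 8, 64, 5
concepts = rng.normal(size=(n_concepts, feat_dim))
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)

def concept_activations(features: np.ndarray) -> np.ndarray:
    """Cosine similarity between an image feature vector and each concept."""
    f = features / np.linalg.norm(features)
    return concepts @ f

def classify(features: np.ndarray, class_weights: np.ndarray) -> int:
    """Predict a class from concept activations via a linear head."""
    acts = concept_activations(features)  # interpretable bottleneck
    return int(np.argmax(class_weights @ acts))

# Hypothetical linear head and a random "image feature" for illustration.
W = rng.normal(size=(n_classes, n_concepts))
pred = classify(rng.normal(size=feat_dim), W)  # a class index in [0, n_classes)
```

Because every class score is a linear combination of concept activations, each prediction can be attributed back to named concepts, which is the interpretability property the abstract emphasizes.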

Details

Language(s): eng - English
Dates: 2025-03-17
 Publication Status: Published online
 Pages: 19 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 2503.13429
URI: https://arxiv.org/abs/2503.13429
BibTex Citekey: Pham2503.13429
 Degree: -
