Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes

cs.CV
Authors

Nhi Pham, Bernt Schiele, Adam Kortylewski, Jonas Fischer
Project Resources

ArXiv Paper (Paper, source: arXiv)
Abstract

With the rise of neural networks, especially in high-stakes applications, these networks need two properties to ensure their safety: (i) robustness and (ii) interpretability. Recent advances in classifiers with 3D volumetric object representations have demonstrated greatly enhanced robustness on out-of-distribution data. However, these 3D-aware classifiers have not been studied from the perspective of interpretability. We introduce CAVE (Concept Aware Volumes for Explanations), a new direction that unifies interpretability and robustness in image classification. We design an inherently interpretable and robust classifier by extending existing 3D-aware classifiers with concepts extracted from their volumetric representations for classification. Across an array of quantitative interpretability metrics, we compare against concept-based approaches from the explainable-AI literature and show that CAVE discovers well-grounded concepts that are used consistently across images, while achieving superior robustness.
