Optimising for Interpretability: Convolutional Dynamic Alignment Networks

Böhle, M. D., Fritz, M., & Schiele, B. (2021). Optimising for Interpretability: Convolutional Dynamic Alignment Networks. Retrieved from https://arxiv.org/abs/2109.13004.

Files

arXiv:2109.13004.pdf (Preprint), 14MB
Name:
arXiv:2109.13004.pdf
Description:
File downloaded from arXiv at 2021-11-22 09:59. arXiv admin note: substantial text overlap with arXiv:2104.00032.
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Böhle, Moritz Daniel (1), Author
Fritz, Mario (2), Author
Schiele, Bernt (1), Author
Affiliations:
(1) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
(2) External Organizations, ou_persistent22

Content

Free keywords: Statistics, Machine Learning, stat.ML; Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Learning, cs.LG
Abstract: We introduce a new family of neural network models called Convolutional Dynamic Alignment Networks (CoDA Nets), which are performant classifiers with a high degree of inherent interpretability. Their core building blocks are Dynamic Alignment Units (DAUs), which are optimised to transform their inputs with dynamically computed weight vectors that align with task-relevant patterns. As a result, CoDA Nets model the classification prediction through a series of input-dependent linear transformations, allowing for a linear decomposition of the output into individual input contributions. Given the alignment of the DAUs, the resulting contribution maps align with discriminative input patterns. These model-inherent decompositions are of high visual quality and outperform existing attribution methods under quantitative metrics. Further, CoDA Nets constitute performant classifiers, achieving results on par with ResNet and VGG models on, e.g., CIFAR-10 and TinyImagenet. Lastly, CoDA Nets can be combined with conventional neural network models to yield powerful classifiers that more easily scale to complex datasets such as Imagenet whilst exhibiting an increased interpretable depth, i.e., the output can be explained well in terms of contributions from intermediate layers within the network.
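For illustration, the following is a minimal, self-contained sketch of the idea described in the abstract: a unit computes an input-dependent weight vector w(x) and outputs w(x)·x, so the prediction decomposes exactly into per-input contributions. This is not the authors' implementation; the low-rank parametrisation w(x) = B·tanh(Ax), the use of tanh as a stand-in for the paper's norm constraint, and all class and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyDynamicAlignmentUnit:
    """Toy unit whose weight vector is itself a (bounded) function of the input."""

    def __init__(self, in_dim, rank):
        # Low-rank parametrisation of the dynamic weights (illustrative choice).
        self.A = rng.normal(size=(rank, in_dim)) / np.sqrt(in_dim)
        self.B = rng.normal(size=(in_dim, rank)) / np.sqrt(rank)

    def dynamic_weights(self, x):
        # Input-dependent weight vector; tanh merely keeps its entries bounded
        # (a stand-in for the norm constraint used in the paper).
        return self.B @ np.tanh(self.A @ x)

    def __call__(self, x):
        # Output is linear in x *given* the dynamically computed weights.
        return self.dynamic_weights(x) @ x

# One unit per class gives a toy "classifier" whose logits decompose exactly
# into per-feature contributions w_c(x) * x (the model-inherent explanation).
in_dim, n_classes = 8, 3
units = [ToyDynamicAlignmentUnit(in_dim, rank=4) for _ in range(n_classes)]

x = rng.normal(size=in_dim)
logits = np.array([u(x) for u in units])

c = int(np.argmax(logits))
contributions = units[c].dynamic_weights(x) * x
assert np.isclose(contributions.sum(), logits[c])  # exact linear decomposition
print("logits:", logits)
print(f"contributions to class {c}:", contributions)
```

In the full CoDA Nets these units are convolutional and stacked, so the composition of input-dependent linear maps remains linear in the input and yields the contribution maps evaluated in the paper; the sketch above only shows the single-unit decomposition property.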

Details

Language(s): eng - English
 Dates: 2021-09-27, 2021
 Publication Status: Published online
 Pages: 29 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 2109.13004
URI: https://arxiv.org/abs/2109.13004
BibTex Citekey: Boehle2109.13004
 Degree: -
