  Interpretability Beyond Classification Output: Semantic Bottleneck Networks

Losch, M., Fritz, M., & Schiele, B. (2019). Interpretability Beyond Classification Output: Semantic Bottleneck Networks. Retrieved from http://arxiv.org/abs/1907.10882.

Basic

Genre: Paper
LaTeX: Interpretability Beyond Classification Output: {S}emantic Bottleneck Networks

Files

arXiv:1907.10882.pdf (Preprint), 4MB
Name: arXiv:1907.10882.pdf
Description: File downloaded from arXiv at 2019-12-09 12:09; Correct figures in appendix
OA-Status:
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Creators:
Losch, Max (1), Author
Fritz, Mario (2), Author
Schiele, Bernt (1), Author
Affiliations:
(1) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV, Computer Science, Learning, cs.LG
Abstract: Today's deep learning systems deliver high performance based on end-to-end training, but they are hard to interpret. To address this issue, we propose Semantic Bottleneck Networks (SBN): deep networks with semantically interpretable intermediate layers on which all downstream results are based. As a consequence, what the final prediction is based on is transparent to the engineer, and failure cases and modes can be analyzed and avoided by high-level reasoning. We present a case study on street scene segmentation to demonstrate the feasibility and power of SBN. In particular, we start from a well-performing classic deep network, which we adapt to house an SB-layer containing task-related semantic concepts (such as object parts and materials). Importantly, we can recover state-of-the-art performance despite a drastic dimensionality reduction from 1000s (non-semantic feature) to 10s (semantic concept) channels. Additionally, we show how the activations of the SB-layer can be used both for interpreting failure cases of the network and for predicting the confidence of the resulting output. For the first time, e.g., we show interpretable segmentation results for most predictions at over 99% accuracy.
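
To make the idea concrete, the following is a minimal, illustrative sketch (written in PyTorch, which the record does not specify) of how a semantic bottleneck could be spliced between a high-dimensional backbone and a downstream segmentation head. The class and variable names, channel counts, and the sigmoid activation are assumptions made for illustration only; this is not the authors' implementation.

import torch
import torch.nn as nn

class SemanticBottleneck(nn.Module):
    """Sketch of a semantic bottleneck (SB) layer.

    Projects a high-dimensional feature map (1000s of channels) down to a
    handful of channels, one per semantic concept (e.g. an object part or
    material), then re-expands so the downstream head keeps its interface.
    """

    def __init__(self, in_channels: int, num_concepts: int, out_channels: int):
        super().__init__()
        # 1x1 conv maps backbone features to per-concept activation maps.
        self.to_concepts = nn.Conv2d(in_channels, num_concepts, kernel_size=1)
        # 1x1 conv maps concept maps back up for the downstream head.
        self.from_concepts = nn.Conv2d(num_concepts, out_channels, kernel_size=1)

    def forward(self, features: torch.Tensor):
        # Concept logits can be supervised with concept annotations and
        # inspected directly for failure analysis and confidence estimation.
        concept_logits = self.to_concepts(features)
        restored = self.from_concepts(torch.sigmoid(concept_logits))
        return restored, concept_logits

if __name__ == "__main__":
    # Example: 2048 backbone channels reduced to 40 concept channels.
    sb = SemanticBottleneck(in_channels=2048, num_concepts=40, out_channels=2048)
    feats = torch.randn(1, 2048, 32, 64)
    restored, concepts = sb(feats)
    print(restored.shape, concepts.shape)

In such a setup, concept_logits is the interpretable quantity: it would carry the per-pixel concept evidence that the abstract describes using for failure interpretation and confidence prediction, while restored feeds the unchanged downstream layers.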

Details

Language(s): eng - English
Dates: 2019-07-25, 2019-07-28, 2019
 Publication Status: Published online
 Pages: 16 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1907.10882
URI: http://arxiv.org/abs/1907.10882
BibTeX Citekey: Losch2019
 Degree: -
