  Textual Explanations for Self-Driving Vehicles

Kim, J., Rohrbach, A., Darrell, T., Canny, J., & Akata, Z. (2018). Textual Explanations for Self-Driving Vehicles. In V. Ferrari, M. Hebert, C. Sminchisescu, & Y. Weiss (Eds.), Computer Vision -- ECCV 2018 (pp. 577-593). Berlin: Springer. doi:10.1007/978-3-030-01216-8_35.

Basic

Genre: Conference Paper

Files


Locators


Creators

 Creators:
Kim, Jinkyu 1, Author
Rohrbach, Anna 2, Author
Darrell, Trevor 1, Author
Canny, John 1, Author
Akata, Zeynep 2, Author
Affiliations:
1 External Organizations, ou_persistent22
2 Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547

Content

Free keywords: Explainable Deep Driving, BDD-X dataset
 Abstract: Deep neural perception and control networks have become key components of self-driving vehicles. User acceptance is likely to benefit from easy-to-interpret textual explanations which allow end-users to understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. We propose a new approach to introspective explanations which consists of two parts. First, we use a visual (spatial) attention model to train a convolutional network end-to-end from images to the vehicle control commands, i.e., acceleration and change of course. The controller's attention identifies image regions that potentially influence the network's output. Second, we use an attention-based video-to-text model to produce textual explanations of model actions. The attention maps of controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong- and weak-alignment. Finally, we explore a version of our model that generates rationalizations, and compare with introspective explanations on the same video segments. We evaluate these models on a novel driving dataset with ground-truth human explanations, the Berkeley DeepDrive eXplanation (BDD-X) dataset. Code is available at https://github.com/JinkyuKimUCB/explainable-deep-driving
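
The alignment step described in the abstract can be illustrated with a small sketch. The snippet below is not taken from the linked repository and is not necessarily the authors' formulation; it assumes the controller and the explanation model each produce a spatial attention map over the same grid, and uses a KL-divergence penalty as one plausible way to realize the "strong" alignment mentioned above.

```python
# Hypothetical sketch of a strong attention-alignment penalty
# (an assumption for illustration, not the paper's exact loss).
import numpy as np

def attention_alignment_loss(controller_attn, explainer_attn, eps=1e-8):
    """KL(controller || explainer) over flattened spatial attention weights.

    Both inputs are non-negative arrays of shape (H, W); each is normalized
    to sum to 1 before the divergence is computed.
    """
    p = controller_attn.flatten() + eps
    q = explainer_attn.flatten() + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy usage: identical maps give zero loss, diverging maps a positive one.
a = np.random.rand(10, 20)
print(attention_alignment_loss(a, a))                       # 0.0
print(attention_alignment_loss(a, np.random.rand(10, 20)))  # > 0
```

Added to the explanation model's training objective, a term like this pushes the generator to attend to the same regions as the controller, which is what grounds the textual explanation in the evidence the controller actually used.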

Details

Language(s): eng - English
 Dates: 2018, 2018-07-30, 2018
 Publication Status: Issued
 Pages: 24 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: BibTex Citekey: akataECCV18
DOI: 10.1007/978-3-030-01216-8_35
 Degree: -

Event

Title: 15th European Conference on Computer Vision
Place of Event: Munich, Germany
Start-/End Date: 2018-09-08 - 2018-09-14

Legal Case


Project information


Source 1

Title: Computer Vision -- ECCV 2018
  Abbreviation: ECCV 2018
  Subtitle: 15th European Conference; Munich, Germany, September 8-14, 2018; Proceedings, Part II
Source Genre: Proceedings
 Creator(s):
Ferrari, Vittorio 1, Editor
Hebert, Martial 1, Editor
Sminchisescu, Cristian 1, Editor
Weiss, Yair 1, Editor
Affiliations:
1 External Organizations, ou_persistent22
Publ. Info: Berlin : Springer
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 577 - 593
Identifier: ISBN: 978-3-030-01215-1

Source 2

Title: Lecture Notes in Computer Science
  Abbreviation: LNCS
Source Genre: Series
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: 11206
Sequence Number: -
Start / End Page: -
Identifier: -