Record

  Generating Counterfactual Explanations with Natural Language

Hendricks, L. A., Hu, R., Darrell, T., & Akata, Z. (2018). Generating Counterfactual Explanations with Natural Language. In B. Kim, K. R. Varshney, & A. Weller (Eds.), Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning (pp. 95-98). Retrieved from http://arxiv.org/abs/1806.09809.


Basic data

Genre: Conference paper

Files

arXiv:1806.09809.pdf (Preprint), 549 KB
Name:
arXiv:1806.09809.pdf
Description:
File downloaded from arXiv at 2018-09-17 14:26; presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden.
OA status: -
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

Creators

Creators:
Hendricks, Lisa Anne¹, Author
Hu, Ronghang¹, Author
Darrell, Trevor¹, Author
Akata, Zeynep², Author
Affiliations:
¹ External Organizations, ou_persistent22
² Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Natural language explanations of deep neural network decisions provide an intuitive way for an AI agent to articulate a reasoning process. Current textual explanations learn to discuss class-discriminative features in an image. However, it is also helpful to understand which attributes might change a classification decision if present in an image (e.g., "This is not a Scarlet Tanager because it does not have black wings."). We call such textual explanations counterfactual explanations, and propose an intuitive method to generate counterfactual explanations by inspecting which evidence in an input is missing, but might contribute to a different classification decision if present in the image. To demonstrate our method we consider a fine-grained image classification task in which we take as input an image and a counterfactual class and output text which explains why the image does not belong to a counterfactual class. We then analyze our generated counterfactual explanations both qualitatively and quantitatively using proposed automatic metrics.
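
The core idea in the abstract can be sketched in a few lines of Python: given per-attribute evidence scores for an image and a list of attributes typical of the counterfactual class, report the attributes whose evidence is missing. This is a minimal, hypothetical illustration; the names (image_attrs, class_attributes) and the 0.5 threshold are placeholder assumptions, not the authors' implementation.

def counterfactual_explanation(image_attrs, cf_class, class_attributes, threshold=0.5):
    # Hypothetical sketch: collect attributes the counterfactual class
    # typically shows but whose evidence is weak or absent in this image.
    missing = [a for a in class_attributes[cf_class]
               if image_attrs.get(a, 0.0) < threshold]
    if not missing:
        return f"No missing evidence found for class '{cf_class}'."
    return (f"This is not a {cf_class} because it does not have "
            + " or ".join(missing) + ".")

# Toy usage with made-up attribute scores:
class_attributes = {"Scarlet Tanager": ["black wings", "red body"]}
image_attrs = {"red body": 0.9, "black wings": 0.1}
print(counterfactual_explanation(image_attrs, "Scarlet Tanager", class_attributes))
# -> This is not a Scarlet Tanager because it does not have black wings.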

Details

Language(s): eng - English
Date: 2018-06-26
Publication status: Published online
Pages: 4 p.
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: arXiv: 1806.09809
URI: http://arxiv.org/abs/1806.09809
BibTeX citekey: Hendricks_WHI2018
Degree: -

Event

Title: ICML Workshop on Human Interpretability in Machine Learning
Venue: Stockholm, Sweden
Start/end date: 2018-07-14 - 2018-07-14

Source 1

Title: Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning
Short title: WHI 2018
Source genre: Conference proceedings
Creators:
Kim, Been¹, Editor
Varshney, Kush R.¹, Editor
Weller, Adrian¹, Editor
Affiliations:
¹ External Organizations, ou_persistent22
Place, publisher, edition: -
Pages: -
Volume / Issue: -
Article number: -
Start / end page: 95 - 98
Identifier: -