
Released

Conference Paper

Generating Counterfactual Explanations with Natural Language

MPS-Authors

Akata,  Zeynep
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

Fulltext (public)

arXiv:1806.09809.pdf
(Preprint), 549KB

Citation

Hendricks, L. A., Hu, R., Darrell, T., & Akata, Z. (2018). Generating Counterfactual Explanations with Natural Language. In B. Kim, K. R. Varshney, & A. Weller (Eds.), Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning. Retrieved from http://arxiv.org/abs/1806.09809.


Cite as: https://hdl.handle.net/21.11116/0000-0002-18CE-C
Abstract
Natural language explanations of deep neural network decisions provide an intuitive way for an AI agent to articulate its reasoning process. Current textual explanations learn to discuss class-discriminative features in an image. However, it is also helpful to understand which attributes might change a classification decision if present in an image (e.g., "This is not a Scarlet Tanager because it does not have black wings."). We call such textual explanations counterfactual explanations, and propose an intuitive method to generate them by inspecting which evidence in an input is missing, but might contribute to a different classification decision if present in the image. To demonstrate our method, we consider a fine-grained image classification task in which we take as input an image and a counterfactual class and output text explaining why the image does not belong to the counterfactual class. We then analyze our generated counterfactual explanations both qualitatively and quantitatively using proposed automatic metrics.
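
The core idea in the abstract — find attributes the counterfactual class requires but for which the image shows no evidence, then phrase them as a negative explanation — can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation; the function name, score dictionary, and threshold are all assumptions for the sake of the example.

```python
# Hypothetical sketch of counterfactual-explanation generation:
# attributes required by the counterfactual class but absent (low evidence)
# in the image become the content of a "This is not a ... because ..." sentence.
# All names and the 0.5 threshold are illustrative assumptions, not the paper's code.

def counterfactual_explanation(image_attr_scores, class_attributes,
                               counterfactual_class, threshold=0.5):
    """Explain why the image does not belong to the counterfactual class."""
    required = class_attributes[counterfactual_class]
    # Attributes the counterfactual class needs but the image lacks.
    missing = [a for a in required
               if image_attr_scores.get(a, 0.0) < threshold]
    if not missing:
        return f"No missing evidence found for {counterfactual_class}."
    return (f"This is not a {counterfactual_class} because it does not have "
            + " or ".join(missing) + ".")

# Toy example in the spirit of the paper's fine-grained bird-classification setting.
scores = {"black wings": 0.1, "red body": 0.9}
attrs = {"Scarlet Tanager": ["black wings", "red body"]}
print(counterfactual_explanation(scores, attrs, "Scarlet Tanager"))
# -> This is not a Scarlet Tanager because it does not have black wings.
```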