
Item Details


Released

Conference Paper

Generating Counterfactual Explanations with Natural Language

MPS-Authors
/persons/resource/persons127761

Akata,  Zeynep
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

External Resource
There are no locators available
Fulltext (restricted access)
Fulltext (public)

arXiv:1806.09809.pdf
(プレプリント), 549KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Hendricks, L. A., Hu, R., Darrell, T., & Akata, Z. (2018). Generating Counterfactual Explanations with Natural Language. In B. Kim, K. R. Varshney, & A. Weller (Eds.), Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning (pp. 95-98). Retrieved from http://arxiv.org/abs/1806.09809.


Cite as: https://hdl.handle.net/21.11116/0000-0002-18CE-C
Abstract
Natural language explanations of deep neural network decisions provide an
intuitive way for an AI agent to articulate a reasoning process. Current
textual explanations learn to discuss class-discriminative features in an
image. However, it is also helpful to understand which attributes might
change a classification decision if present in an image (e.g., "This is
not a Scarlet Tanager because it does not have black wings."). We call
such textual explanations counterfactual explanations, and propose an
intuitive method to generate them by inspecting which evidence in an
input is missing, but might contribute to a different classification
decision if present in the image. To demonstrate our method, we consider
a fine-grained image classification task in which we take as input an
image and a counterfactual class and output text that explains why the
image does not belong to the counterfactual class. We then analyze our
generated counterfactual explanations both qualitatively and
quantitatively using the proposed automatic metrics.
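
The core idea in the abstract (inspect which evidence for a counterfactual
class is missing from an image, then verbalize it) can be illustrated with
a minimal sketch. The class-attribute table, the attribute scores, and the
sentence template below are hypothetical stand-ins, not the authors' model.

# Minimal sketch of the idea described in the abstract: given attribute
# scores predicted for an image and the attributes that define a
# counterfactual class, report the class-defining attributes that appear
# to be missing and phrase them as a counterfactual explanation.
# All names, scores, and the template are hypothetical illustrations.

CLASS_ATTRIBUTES = {
    "Scarlet Tanager": {"red body", "black wings", "black tail"},
    "Summer Tanager": {"red body", "red wings", "red tail"},
}

def missing_evidence(image_scores, counterfactual_class, threshold=0.5):
    """Attributes of the counterfactual class whose predicted presence
    in the image falls below the threshold."""
    required = CLASS_ATTRIBUTES[counterfactual_class]
    return sorted(a for a in required if image_scores.get(a, 0.0) < threshold)

def counterfactual_explanation(image_scores, counterfactual_class):
    absent = missing_evidence(image_scores, counterfactual_class)
    if not absent:
        return f"No missing evidence found for {counterfactual_class}."
    return (f"This is not a {counterfactual_class} because it does not have "
            + " or ".join(absent) + ".")

# Attribute probabilities a vision model might output for one image.
scores = {"red body": 0.9, "red wings": 0.8, "black wings": 0.1}
print(counterfactual_explanation(scores, "Scarlet Tanager"))
# -> This is not a Scarlet Tanager because it does not have
#    black tail or black wings.

The paper itself presumably learns to generate this text rather than
filling a template; the table lookup here only makes the missing-evidence
inspection step concrete.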