
Released

Journal Article

How to Explain Individual Classification Decisions

MPS-Authors

Harmeling,  S
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., & Müller, K.-R. (2010). How to Explain Individual Classification Decisions. Journal of Machine Learning Research, 11, 1803-1831.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BF74-6
Abstract
After building a classifier with modern machine learning tools, we typically have a black box that predicts well on unseen data. We thus get an answer to the question of which label is most likely for a given unseen data point. However, most methods provide no answer as to why the model predicted a particular label for a single instance, or which features were most influential for that particular instance. Decision trees are currently the only method able to provide such explanations. This paper proposes a procedure which, based on a set of assumptions, makes it possible to explain the decisions of any classification method.
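A minimal sketch of this style of local, per-instance explanation: take the gradient of the model's predicted class probability with respect to the input features at the point of interest, so that each entry indicates how strongly (and in which direction) a feature locally influences the prediction. The classifier choice, synthetic dataset, and finite-difference gradient estimate below are illustrative assumptions, not the paper's reference implementation.

# Sketch: local explanation of one prediction via the gradient of the
# predicted class probability (assumptions: scikit-learn logistic
# regression, synthetic data, central finite differences).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

def explanation_vector(clf, x, eps=1e-4):
    """Estimate d P(class 1 | x) / dx by central differences."""
    grad = np.zeros_like(x, dtype=float)
    for j in range(x.size):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[j] += eps
        x_minus[j] -= eps
        p_plus = clf.predict_proba(x_plus.reshape(1, -1))[0, 1]
        p_minus = clf.predict_proba(x_minus.reshape(1, -1))[0, 1]
        grad[j] = (p_plus - p_minus) / (2 * eps)
    return grad

x0 = X[0]
print("predicted label:", clf.predict(x0.reshape(1, -1))[0])
print("local feature influences:", explanation_vector(clf, x0))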