
Item Details


Released

Journal Article

How to Explain Individual Classification Decisions

MPS-Authors
/persons/resource/persons83954

Schroeter, T., Harmeling, S.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84096

Müller, K.-R.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resource
There are no locators available
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., & Müller, K.-R. (2010). How to Explain Individual Classification Decisions. Journal of Machine Learning Research, 11, 1803-1831. Retrieved from http://jmlr.csail.mit.edu/papers/volume11/baehrens10a/baehrens10a.pdf.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BF74-6
Abstract
After building a classifier with modern machine-learning tools, we typically have a black box at hand that predicts well for unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer as to why the model predicted a particular label for a single instance, or which features were most influential for that particular instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which (based on a set of assumptions) allows the decisions of any classification method to be explained.
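To make the idea concrete, the sketch below shows one way such a per-instance explanation could be produced for a probabilistic classifier: the local gradient of the predicted class's probability at the instance of interest, estimated here by finite differences. The scikit-learn classifier, the iris data, and the step size are illustrative assumptions; this is a minimal sketch of the general idea, not the paper's exact procedure.

# Minimal sketch (assumptions: scikit-learn, a smooth probabilistic classifier,
# finite-difference gradients). It illustrates explaining a single prediction
# via a local gradient; it is not the paper's exact construction.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def explanation_vector(model, x, eps=1e-4):
    """Numerical gradient of the predicted class's probability w.r.t. the features of x."""
    x = np.asarray(x, dtype=float)
    label = model.predict(x[None, :])[0]          # the class the model assigns to x
    grad = np.zeros_like(x)
    for i in range(x.size):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += eps
        x_minus[i] -= eps
        p_plus = model.predict_proba(x_plus[None, :])[0, label]
        p_minus = model.predict_proba(x_minus[None, :])[0, label]
        grad[i] = (p_plus - p_minus) / (2.0 * eps)
    return grad  # entries with large magnitude mark locally influential features

print(explanation_vector(clf, X[0]))

Features whose entries in this vector have large magnitude are those that, locally around the chosen instance, most change the model's confidence in its decision, which is the kind of instance-level answer the abstract asks for.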