Record


Released

Journal Article

How to Explain Individual Classification Decisions

MPG Authors

Harmeling, S.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., & Müller, K.-R. (2010). How to Explain Individual Classification Decisions. Journal of Machine Learning Research, 11, 1803-1831.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-BF74-6
Abstract
After building a classifier with modern machine learning tools, we typically have a black box at hand that predicts well for unseen data. Thus, we get an answer to the question of which label is most likely for a given unseen data point. However, most methods provide no answer as to why the model predicted a particular label for a single instance, or which features were most influential for that particular instance. The only methods currently able to provide such explanations are decision trees. This paper proposes a procedure which, based on a set of assumptions, allows the decisions of any classification method to be explained.
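To illustrate the idea in the abstract, below is a minimal Python sketch of a model-agnostic local explanation: the gradient of a classifier's predicted class probability at a single instance, estimated here by central finite differences. The function name explanation_vector, the logistic-regression toy data, and the step size eps are illustrative assumptions for this sketch, not the paper's implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def explanation_vector(predict_proba, x, class_index, eps=1e-4):
    """Finite-difference estimate of d P(class | x) / d x at instance x."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += eps
        x_minus[i] -= eps
        p_plus = predict_proba(x_plus.reshape(1, -1))[0, class_index]
        p_minus = predict_proba(x_minus.reshape(1, -1))[0, class_index]
        grad[i] = (p_plus - p_minus) / (2 * eps)
    return grad

# Toy usage: train any probabilistic classifier, then explain one prediction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x0 = np.array([0.5, -0.2])
ev = explanation_vector(clf.predict_proba, x0, class_index=1)
print(ev)  # features with large-magnitude entries influenced this decision most

Entries of the explanation vector with large magnitude indicate features whose local perturbation most changes the predicted class probability; the sign indicates the direction of influence. Because only predict_proba is queried, the same sketch applies to any classifier exposing class probabilities.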