Abstract:
After building a classifier with modern machine-learning tools, we typically have a black box at hand that predicts well for unseen data. We thus get an answer to the question of which label is most likely for a given unseen data point. However, most methods provide no answer as to why the model predicted a particular label for a single instance, or which features were most influential for that instance. The only methods currently able to provide such explanations are decision trees. This paper proposes a procedure which, under a set of assumptions, makes it possible to explain the decisions of any classification method.
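The abstract does not spell out the procedure itself. Purely as an illustrative sketch of the general idea of instance-level, model-agnostic explanation (a hypothetical perturbation-based approach, not necessarily the paper's method), one could estimate each feature's influence on a single prediction by replacing its value with samples from the data and measuring the change in the predicted probability:

# Illustrative sketch only: a generic, model-agnostic way to estimate how
# influential each feature was for one instance. This is an assumed
# perturbation-based approach for demonstration, not the paper's procedure.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)  # any black-box classifier

def feature_influence(model, X_background, x, n_samples=100, rng=None):
    """For instance x, return (predicted class, per-feature influence):
    the drop in the predicted probability of the originally predicted
    class when each feature is replaced by random background values."""
    if rng is None:
        rng = np.random.default_rng(0)
    pred_class = model.predict(x.reshape(1, -1))[0]
    base_prob = model.predict_proba(x.reshape(1, -1))[0, pred_class]
    influences = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        perturbed = np.tile(x, (n_samples, 1))
        # Replace feature j with values sampled from the background data.
        perturbed[:, j] = rng.choice(X_background[:, j], size=n_samples)
        probs = model.predict_proba(perturbed)[:, pred_class]
        influences[j] = base_prob - probs.mean()
    return pred_class, influences

cls, infl = feature_influence(model, X, X[0])
print("predicted class:", cls)
print("per-feature influence:", infl.round(3))

A large positive influence for feature j means the prediction for this particular instance relies heavily on its observed value, which is the kind of per-instance answer the abstract says most black-box methods fail to provide.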