Item

Status: Released
Genre: Paper

PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems

MPS-Authors

Ghazimatin, Azin
Databases and Information Systems, MPI for Informatics, Max Planck Society

Balalau, Oana
Databases and Information Systems, MPI for Informatics, Max Planck Society

Saha Roy, Rishiraj
Databases and Information Systems, MPI for Informatics, Max Planck Society

Weikum, Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society

External Resource
No external resources are shared
Fulltext (public)

arXiv:1911.08378.pdf (Preprint), 3MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Ghazimatin, A., Balalau, O., Saha Roy, R., & Weikum, G. (2019). PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems. Retrieved from http://arxiv.org/abs/1911.08378.


Cite as: http://hdl.handle.net/21.11116/0000-0005-8415-E
Abstract
Interpretable explanations for recommender systems and other machine learning models are crucial to gain user trust. Prior works that focus on paths connecting users and items in a heterogeneous network have several limitations, such as discovering relationships rather than true explanations, or disregarding other users' privacy. In this work, we take a fresh perspective and present PRINCE: a provider-side mechanism to produce tangible explanations for end-users, where an explanation is defined to be a minimal set of actions performed by the user that, if removed, changes the recommendation to a different item. Given a recommendation, PRINCE uses a polynomial-time optimal algorithm for finding this minimal set of a user's actions from an exponential search space, based on random walks over dynamic graphs. Experiments on two real-world datasets show that PRINCE provides more compact explanations than intuitive baselines, and insights from a crowdsourced user study demonstrate the viability of such action-based explanations. We thus posit that PRINCE produces scrutable, actionable, and concise explanations, owing to its use of counterfactual evidence, a user's own actions, and minimal sets, respectively.
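
As a concrete companion to the abstract's definition, the sketch below scores items with Personalized PageRank (PPR) over a small user-item interaction graph and searches for the smallest subset of a user's own actions whose removal flips the top recommendation. This is a deliberately naive toy in Python, not PRINCE itself: PRINCE replaces the exponential subset search with a polynomial-time optimal algorithm based on random walks over dynamic graphs, and every graph, name, and parameter below is an illustrative assumption rather than something taken from the paper.

from itertools import combinations

# Illustrative toy only -- NOT the PRINCE algorithm from the paper. PRINCE
# finds an optimal minimal action set in polynomial time via random walks
# over dynamic graphs; this sketch brute-forces action subsets instead.

def ppr(edges, source, alpha=0.15, iters=60):
    # Personalized PageRank by power iteration on an undirected graph given
    # as (node, node) pairs, with restart probability `alpha` at `source`.
    nodes = {n for e in edges for n in e}
    nbrs = {n: [] for n in nodes}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    score = {n: (1.0 if n == source else 0.0) for n in nodes}
    for _ in range(iters):
        nxt = {n: (alpha if n == source else 0.0) for n in nodes}
        for n in nodes:
            share = (1.0 - alpha) * score[n] / len(nbrs[n])
            for m in nbrs[n]:
                nxt[m] += share
        score = nxt
    return score

def top_item(edges, user, candidates):
    # Recommend the candidate item with the highest PPR score from the user.
    scores = ppr(edges, user)
    return max(candidates, key=lambda item: scores.get(item, 0.0))

def counterfactual_explanation(edges, user, candidates):
    # Smallest subset of the user's own actions (edges incident to `user`)
    # whose removal changes the top-ranked item. The subset enumeration is
    # exponential; it is here only to make the counterfactual idea concrete.
    actions = [e for e in edges if user in e]
    rec = top_item(edges, user, candidates)
    for k in range(1, len(actions) + 1):
        for removed in combinations(actions, k):
            rest = [e for e in edges if e not in removed]
            new_rec = top_item(rest, user, candidates)
            if new_rec != rec:
                return rec, new_rec, removed
    return rec, None, ()

# Toy graph: u1's actions are (u1, i1) and (u1, i2); i3 and i4 are candidates.
edges = [("u1", "i1"), ("u1", "i2"),
         ("u2", "i1"), ("u2", "i3"),
         ("u3", "i2"), ("u3", "i4")]
rec, new_rec, removed = counterfactual_explanation(edges, "u1", ["i3", "i4"])
print(f"recommended {rec}; removing {removed} flips it to {new_rec}")

On this toy graph the top item for u1 is i3, and removing the single action (u1, i1) cuts u1 off from i3's neighborhood, flipping the recommendation to i4; that one-edge set is precisely the kind of minimal, action-based counterfactual explanation the abstract describes.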