  Counterfactual Explanations for Neural Recommenders

Tran, K. H., Ghazimatin, A., & Saha Roy, R. (2021). Counterfactual Explanations for Neural Recommenders. Retrieved from https://arxiv.org/abs/2105.05008.


Files

arXiv:2105.05008.pdf (Preprint), 2MB
Name: arXiv:2105.05008.pdf
Description: File downloaded from arXiv at 2021-10-26 13:38. SIGIR 2021 Short Paper, 5 pages.
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Creators:
Tran, Khanh Hiep (1), Author
Ghazimatin, Azin (1), Author
Saha Roy, Rishiraj (1), Author
Affiliations:
(1) Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018

Content

Free keywords: Computer Science, Information Retrieval, cs.IR; Computer Science, Learning, cs.LG
 Abstract: Understanding why specific items are recommended to users can significantly
increase their trust and satisfaction in the system. While neural recommenders
have become the state-of-the-art in recent years, the complexity of deep models
still makes the generation of tangible explanations for end users a challenging
problem. Existing methods are usually based on attention distributions over a
variety of features, which are still questionable regarding their suitability
as explanations, and rather unwieldy to grasp for an end user. Counterfactual
explanations based on a small set of the user's own actions have been shown to
be an acceptable solution to the tangibility problem. However, current work on
such counterfactuals cannot be readily applied to neural models. In this work,
we propose ACCENT, the first general framework for finding counterfactual
explanations for neural recommenders. It extends recently-proposed influence
functions for identifying training points most relevant to a recommendation,
from a single to a pair of items, while deducing a counterfactual set in an
iterative process. We use ACCENT to generate counterfactual explanations for
two popular neural models, Neural Collaborative Filtering (NCF) and Relational
Collaborative Filtering (RCF), and demonstrate its feasibility on a sample of
the popular MovieLens 100K dataset.
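
The iterative procedure described in the abstract can be read as a greedy search over the user's own actions. The Python sketch below illustrates that reading under stated assumptions: it presumes a per-action influence estimate (e.g., obtained via influence functions) on the score gap between the recommended item and a candidate replacement item, and that these influences are roughly additive. All names (counterfactual_for_replacement, influence, gap) are illustrative and not taken from the authors' implementation.

# Illustrative sketch only (not the authors' code): greedily remove the
# user's actions with the largest estimated influence on the score gap
# score(recommended item) - score(replacement item), until the gap is
# predicted to flip, i.e. the replacement would be recommended instead.
def counterfactual_for_replacement(user_actions, gap, influence):
    candidates = list(user_actions)
    removed = []
    while gap > 0 and candidates:
        best = max(candidates, key=influence)
        if influence(best) <= 0:
            break  # no remaining action is estimated to close the gap
        candidates.remove(best)
        removed.append(best)
        gap -= influence(best)  # assumes influences are roughly additive
    return removed if gap <= 0 else None  # None: no counterfactual found

# Hypothetical usage with made-up influence scores:
influences = {"liked Movie A": 0.4, "liked Movie B": 0.1, "rated Movie C": -0.2}
print(counterfactual_for_replacement(influences.keys(), gap=0.45,
                                     influence=influences.get))
# -> ['liked Movie A', 'liked Movie B']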

Details

Language(s): eng - English
Dates: 2021-05-11, 2021
Publication Status: Published online
Pages: 5 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2105.05008
URI: https://arxiv.org/abs/2105.05008
BibTex Citekey: Tran_2105.05008
Degree: -

Project information

Project name: imPACT
Grant ID: 610150
Funding program: Funding Programme 7 (FP7)
Funding organization: European Commission (EC)

Source
