State-of-the-art recommender systems can generate high-quality recommendations, but they usually cannot provide intuitive explanations to humans because they rely on black-box prediction models. This lack of transparency has highlighted the critical importance of improving the explainability of recommender systems. In this paper, we propose to extract causal rules from the user interaction history as post-hoc explanations for black-box sequential recommendation mechanisms, while maintaining the predictive accuracy of the recommendation model. Our approach first generates counterfactual examples with the aid of a perturbation model, and then extracts personalized causal relationships for the recommendation model through a causal rule mining algorithm. Experiments are conducted on several state-of-the-art sequential recommendation models and real-world datasets to verify the performance of our model in generating causal explanations. We evaluate the discovered causal explanations in terms of quality and fidelity; the results show that, compared with conventional association rules, causal rules provide personalized and more effective explanations for the behavior of black-box recommendation models.
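The two-stage pipeline described above (perturb the interaction history, observe whether the black-box model's recommendation changes, then keep item-to-recommendation pairs as candidate causal rules) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the `recommend` function is a stub standing in for any trained sequential model, and single-item deletion plus the rule-collection heuristic are simplifications of the paper's perturbation model and causal rule mining algorithm.

```python
from collections import Counter

def recommend(seq):
    # Stub black-box sequential recommender: recommends the id of the
    # most frequent item plus one. A stand-in for any trained model.
    most_common_item = Counter(seq).most_common(1)[0][0]
    return most_common_item + 1

def candidate_causal_rules(seq):
    """Perturb the history by removing one item at a time; if the
    black-box recommendation changes, record (removed item -> original
    recommendation) as a candidate causal rule. A simplified sketch of
    the counterfactual-perturbation + rule-mining idea."""
    original = recommend(seq)
    rules = []
    for i, item in enumerate(seq):
        perturbed = seq[:i] + seq[i + 1:]  # counterfactual history
        if perturbed and recommend(perturbed) != original:
            rules.append((item, original))
    return rules

history = [3, 7, 7, 3]
print(candidate_causal_rules(history))
```

In a real setting, the stub would be replaced by the trained black-box recommender, and the collected candidate rules would be aggregated across perturbations (and users) to keep only relationships that consistently flip the recommendation.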