Paper Title
Reinforced Path Reasoning for Counterfactual Explainable Recommendation
Paper Authors
Paper Abstract
Counterfactual explanations interpret the recommendation mechanism by exploring how minimal alterations to items or users affect recommendation decisions. Existing counterfactual explainable approaches face a huge search space, and their explanations are either action-based (e.g., user clicks) or aspect-based (i.e., item descriptions). We argue that item attribute-based explanations are more intuitive and persuasive for users, since they explain recommendations through fine-grained item demographic features (e.g., brand). Moreover, counterfactual explanations can enhance recommendations by filtering out negative items. In this work, we propose a novel Counterfactual Explainable Recommendation (CERec) framework that generates item attribute-based counterfactual explanations while boosting recommendation performance. CERec optimizes an explanation policy by uniformly searching candidate counterfactuals within a reinforcement learning environment. We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph. We also deploy the explanation policy in a recommendation model to enhance recommendation quality. Extensive explainability and recommendation evaluations demonstrate CERec's ability to provide explanations that are consistent with user preferences while maintaining improved recommendation performance. We release our code at https://github.com/Chrystalii/CERec.
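To make the core idea of an item attribute-based counterfactual explanation concrete, the sketch below shows a toy search for the smallest set of item attributes whose removal flips a recommendation decision. This is a minimal illustrative assumption, not CERec's actual method: the toy preference weights, the `toy_score` function, and the exhaustive `find_counterfactual` search are hypothetical stand-ins for the paper's learned explanation policy and adaptive knowledge-graph path sampler.

```python
# Minimal sketch of item attribute-based counterfactual search (illustrative only).
# All names and data here are assumptions, not CERec's implementation.
from itertools import combinations

# Toy user preference weights over fine-grained item attributes (e.g., brand).
USER_PREF = {"brand:acme": 0.9, "genre:sci-fi": 0.6, "color:red": 0.1}

def toy_score(attributes):
    """Score an item as the sum of the user's weights over its attributes."""
    return sum(USER_PREF.get(a, 0.0) for a in attributes)

def find_counterfactual(attributes, threshold):
    """Return a smallest attribute subset whose removal drops the item below
    the recommendation threshold. Exhaustive search is used for clarity;
    CERec instead learns a reinforcement-learning policy with an adaptive
    path sampler to avoid enumerating this huge space."""
    for size in range(1, len(attributes) + 1):
        for removed in combinations(attributes, size):
            kept = [a for a in attributes if a not in removed]
            if toy_score(kept) < threshold:
                return removed  # minimal change that flips the decision
    return None

item_attrs = ["brand:acme", "genre:sci-fi", "color:red"]
explanation = find_counterfactual(item_attrs, threshold=1.0)
print("Counterfactual explanation (attributes to remove):", explanation)
```

Running this toy example reports that removing the single attribute `brand:acme` is enough to push the item below the recommendation threshold, i.e., the brand is the attribute-level reason for the recommendation. The same intuition scales poorly with real knowledge graphs, which is why the paper replaces exhaustive enumeration with a learned policy and path sampling.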