Abstract - Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation
LLM-based explainable recommenders can produce fluent explanations that are factually correct, yet still justify items using attributes that conflict with a user's historical preferences. Such preference-inconsistent explanations yield logically valid but unconvincing reasoning and are largely missed by standard hallucination or faithfulness metrics. We formalize this failure mode and propose PURE, a preference-aware reasoning framework following a select-then-generate paradigm. Instead of only improving generation, PURE intervenes in evidence selection: it selects a compact set of multi-hop, item-centric reasoning paths that are both factually grounded and aligned with the user's preference structure, guided by user intent, specificity, and diversity to suppress generic, weakly personalized evidence. The selected evidence is then injected into LLM generation via structure-aware prompting that preserves relational constraints. To measure preference inconsistency, we introduce a feature-level, user-centric evaluation metric that reveals misalignment overlooked by factuality-based measures. Experiments on three real-world datasets show that PURE consistently reduces preference-inconsistent explanations and factual hallucinations while maintaining competitive recommendation accuracy, explanation quality, and inference efficiency. These results highlight that trustworthy explanations require not only factual correctness but also justification aligned with user preferences.
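The select-then-generate evidence step described in the abstract can be illustrated with a minimal sketch. The scoring weights, the `intent`/`specificity` fields, and the greedy diversity-penalized selection below are all illustrative assumptions, not the paper's actual formulation:

```python
from dataclasses import dataclass

@dataclass
class Path:
    hops: tuple        # a multi-hop item-centric path, e.g. ("item", "directed_by", "nolan")
    intent: float      # hypothetical alignment with the inferred user intent, in [0, 1]
    specificity: float # hypothetical specificity score (generic attributes score low)

def overlap(a: Path, b: Path) -> float:
    """Jaccard overlap between two paths' hop elements (a stand-in redundancy measure)."""
    sa, sb = set(a.hops), set(b.hops)
    return len(sa & sb) / len(sa | sb)

def select_paths(candidates, k=2, w_intent=0.5, w_spec=0.3, w_div=0.2):
    """Greedily pick a compact, diverse evidence set: reward intent alignment and
    specificity, penalize redundancy with already-selected paths."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def score(p):
            redundancy = max((overlap(p, s) for s in selected), default=0.0)
            return w_intent * p.intent + w_spec * p.specificity - w_div * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

def build_prompt(user: str, item: str, paths) -> str:
    """Structure-aware prompt: the paths are serialized with their relations intact,
    so the LLM is constrained to ground the explanation in the selected evidence."""
    lines = [f"Explain why {item} suits {user}, citing only this evidence:"]
    for p in paths:
        lines.append("  - " + " -> ".join(p.hops))
    return "\n".join(lines)
```

With this scoring, a near-duplicate high-intent path is passed over in favor of a distinct, moderately scored one, which mirrors the abstract's goal of suppressing generic or redundant evidence.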
Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation
1️⃣ One-Sentence Summary
This paper proposes a new method called PURE to address a key problem in explanations generated by AI recommender systems: even when an explanation is factually correct, it is unconvincing if its justification conflicts with the user's historical preferences. PURE generates explanations by preferentially selecting evidence consistent with the user's preferences, making explanations more credible and personalized while maintaining recommendation accuracy.