LLM-Enhanced Reinforcement Learning for Long-Term User Satisfaction in Interactive Recommendation
1️⃣ One-Sentence Summary
This paper proposes a new method that combines large language models with reinforcement learning: a hierarchical design in which the LLM plans diverse content categories and the RL policy handles the concrete item recommendations. This addresses two weaknesses of traditional interactive recommender systems, homogeneous content and neglect of long-term shifts in user interests, and thereby significantly improves long-term user satisfaction.
Interactive recommender systems can dynamically adapt to user feedback, but often suffer from content homogeneity and filter-bubble effects due to overfitting short-term user preferences. While recent efforts aim to improve content diversity, they predominantly operate in static or one-shot settings, neglecting the long-term evolution of user interests. Reinforcement learning provides a principled framework for optimizing long-term user satisfaction by modeling sequential decision-making processes. However, its application in recommendation is hindered by sparse, long-tailed user-item interactions and limited semantic planning capabilities. In this work, we propose LLM-Enhanced Reinforcement Learning (LERL), a novel hierarchical recommendation framework that integrates the semantic planning power of LLMs with the fine-grained adaptability of RL. LERL consists of a high-level LLM-based planner that selects semantically diverse content categories, and a low-level RL policy that recommends personalized items within the selected semantic space. This hierarchical design narrows the action space, enhances planning efficiency, and mitigates overexposure to redundant content. Extensive experiments on real-world datasets demonstrate that LERL significantly improves long-term user satisfaction compared with state-of-the-art baselines. The implementation of LERL is available at this https URL.
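To make the two-level design concrete, here is a minimal Python sketch of the planner-then-policy decision loop the abstract describes. All names (`CATALOG`, `plan_category`, `ItemPolicy`, `run_episode`) are hypothetical, not from the paper: the high-level LLM planner is stubbed with a simple least-recently-shown diversity rule standing in for an LLM prompt over the interaction history, and the low-level policy is tabular Q-learning rather than whatever deep RL agent LERL actually uses.

```python
# Illustrative sketch of a hierarchical LLM-planner + RL-policy loop.
# Assumptions: a categorical item catalog and a scalar satisfaction reward.
import random
from collections import defaultdict

CATALOG = {
    "sci-fi": ["item_a", "item_b", "item_c"],
    "comedy": ["item_d", "item_e"],
    "sports": ["item_f", "item_g", "item_h"],
}

def plan_category(history, categories):
    """High-level planner. In LERL this role is played by an LLM prompted
    with the user's recent interactions; here a least-recently-shown rule
    stands in for its diversity-aware category choice."""
    last_seen = {c: -1 for c in categories}
    for t, (cat, _) in enumerate(history):
        last_seen[cat] = t
    return min(categories, key=lambda c: last_seen[c])

class ItemPolicy:
    """Low-level policy: epsilon-greedy tabular Q-learning over
    (category, item) pairs, restricted to the planner's category."""
    def __init__(self, epsilon=0.1, alpha=0.2):
        self.q = defaultdict(float)
        self.epsilon, self.alpha = epsilon, alpha

    def act(self, category):
        items = CATALOG[category]
        if random.random() < self.epsilon:
            return random.choice(items)
        return max(items, key=lambda i: self.q[(category, i)])

    def update(self, category, item, reward):
        key = (category, item)
        self.q[key] += self.alpha * (reward - self.q[key])

def run_episode(policy, user_feedback, steps=10):
    """One interaction session: plan a category, recommend within it,
    observe satisfaction, and update the low-level policy."""
    history = []
    for _ in range(steps):
        cat = plan_category(history, list(CATALOG))  # LLM level
        item = policy.act(cat)                       # RL level, narrowed action space
        reward = user_feedback(cat, item)            # user satisfaction signal
        policy.update(cat, item, reward)
        history.append((cat, item))
    return history

# Example with a random stand-in for real user feedback:
# run_episode(ItemPolicy(), lambda cat, item: random.random())
```

Note how the planner's category choice shrinks the RL action space from the full catalog to a handful of items per step, which is the mechanism the abstract credits for better planning efficiency and reduced exposure to redundant content.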
Source: arXiv: 2601.19585