
arXiv submission date: 2026-04-09
📄 Abstract - ReRec: Reasoning-Augmented LLM-based Recommendation Assistant via Reinforcement Fine-tuning

With the rise of LLMs, there is an increasing need for intelligent recommendation assistants that can handle complex queries and provide personalized, reasoning-driven recommendations. LLM-based recommenders show potential but struggle with multi-step reasoning, underscoring the need for reasoning-augmented systems. To address this gap, we propose ReRec, a novel reinforcement fine-tuning (RFT) framework designed to improve LLM reasoning on complex recommendation tasks. Our framework introduces three key components: (1) Dual-Graph Enhanced Reward Shaping, which integrates recommendation metrics such as NDCG@K with Query Alignment and Preference Alignment Scores to provide fine-grained reward signals for LLM optimization; (2) Reasoning-aware Advantage Estimation, which decomposes LLM outputs into reasoning segments and penalizes incorrect steps to strengthen recommendation reasoning; and (3) an Online Curriculum Scheduler, which dynamically assesses query difficulty and organizes the training curriculum to ensure stable learning during RFT. Experiments demonstrate that ReRec outperforms state-of-the-art baselines while preserving core abilities such as instruction-following and general knowledge. Our code is available at this https URL.
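As a rough illustration of component (1), Dual-Graph Enhanced Reward Shaping blends a ranking metric with the two alignment scores. The sketch below is an assumption about how such a shaped reward could be computed: the weights, function names, and the linear blend are hypothetical, not the authors' code; only NDCG@K itself follows the standard definition.

```python
# Hypothetical sketch of a shaped reward combining NDCG@K with Query Alignment
# and Preference Alignment scores, as described in the abstract. Weights and
# helper names (shaped_reward, w_rank, ...) are illustrative assumptions.
import math

def ndcg_at_k(ranked_ids, relevant_ids, k=10):
    """Standard NDCG@K: log-discounted gain of relevant hits in the top-k list."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_ids[:k]) if item in relevant_ids)
    ideal_hits = min(len(relevant_ids), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

def shaped_reward(ranked_ids, relevant_ids, query_align, pref_align,
                  w_rank=0.6, w_query=0.2, w_pref=0.2, k=10):
    """Linearly blend the ranking metric with the two alignment scores
    (all assumed to lie in [0, 1]) into one scalar reward for RFT."""
    return (w_rank * ndcg_at_k(ranked_ids, relevant_ids, k)
            + w_query * query_align
            + w_pref * pref_align)

# Example: 2 of the top-3 recommended items are relevant.
r = shaped_reward(["a", "b", "c"], {"a", "c"}, query_align=0.8, pref_align=0.5, k=3)
```

A single scalar of this form can be fed directly to a policy-gradient RFT loop, which is presumably why the paper folds the alignment signals into the reward rather than treating them as separate objectives.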

Top-level tags: llm agents model training
Detailed tags: recommendation systems reinforcement learning reasoning fine-tuning evaluation

ReRec: Reasoning-Augmented LLM-based Recommendation Assistant via Reinforcement Fine-tuning


1️⃣ One-sentence summary

This paper proposes a new framework called ReRec, which trains large language models with a reinforcement learning method that combines fine-grained reward design and dynamic curriculum scheduling, enabling them to perform human-like multi-step reasoning on complex recommendation tasks and thereby deliver more accurate, more personalized recommendations.
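The paper's Reasoning-aware Advantage Estimation decomposes an LLM output into reasoning segments and penalizes incorrect steps. A minimal sketch of that idea follows; the per-step correctness labels, the penalty value, and the function name are all assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of reasoning-aware advantage estimation: each reasoning
# segment inherits the sequence-level advantage, but segments judged incorrect
# receive a negative, down-weighted value instead. The penalty scale is assumed.

def segment_advantages(base_advantage, step_correct, penalty=0.5):
    """Map a scalar sequence advantage to per-segment advantages,
    flipping and shrinking it for segments flagged as incorrect."""
    return [base_advantage * (1.0 if ok else -penalty) for ok in step_correct]

# Example: a 3-step reasoning chain whose middle step is judged wrong.
adv = segment_advantages(base_advantage=1.2, step_correct=[True, False, True])
```

Assigning segment-level rather than sequence-level credit lets the policy gradient push down only the flawed reasoning steps instead of the whole response, which matches the abstract's stated goal of strengthening multi-step reasoning.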

From arXiv: 2604.07851