FlexRec: Adapting LLM-based Recommenders for Flexible Needs via Reinforcement Learning
1️⃣ One-sentence summary
This paper proposes FlexRec, a reinforcement learning framework that introduces a causally grounded fine-grained reward and an uncertainty-aware reward scaling mechanism to address the difficulty traditional recommenders have in adapting to dynamic, diverse recommendation objectives, substantially improving the performance of LLM-based recommenders across a range of scenarios.
Modern recommender systems must adapt to dynamic, need-specific objectives for diverse recommendation scenarios, yet most traditional recommenders are optimized for a single static target and struggle to reconfigure behavior on demand. Recent advances in reinforcement-learning-based post-training have unlocked strong instruction-following and reasoning capabilities in LLMs, suggesting a principled route for aligning them to complex recommendation goals. Motivated by this, we study closed-set autoregressive ranking, where an LLM generates a permutation over a fixed candidate set conditioned on user context and an explicit need instruction. However, applying RL to this setting faces two key obstacles: (i) sequence-level rewards yield coarse credit assignment that fails to provide fine-grained training signals, and (ii) interaction feedback is sparse and noisy, which together lead to inefficient and unstable updates. We propose FlexRec, a post-training RL framework that addresses both issues with (1) a causally grounded item-level reward based on counterfactual swaps within the remaining candidate pool, and (2) critic-guided, uncertainty-aware scaling that explicitly models reward uncertainty and down-weights low-confidence rewards to stabilize learning under sparse supervision. Across diverse recommendation scenarios and objectives, FlexRec achieves substantial gains: it improves NDCG@5 by up to **59%** and Recall@5 by up to **109.4%** in need-specific ranking, and further achieves up to **24.1%** Recall@5 improvement under generalization settings, outperforming strong traditional recommenders and LLM-based baselines.
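To make the two ideas in the abstract concrete, here is a minimal sketch of what a counterfactual-swap item-level reward with variance-based down-weighting could look like. This is not the paper's actual formulation (the summary gives no equations): the NDCG-based reward, the swap-within-remaining-pool loop, and the `1/(1+var)` shrinkage used in place of the critic-guided scale are all illustrative assumptions.

```python
import math

def ndcg_at_k(ranking, rel, k=5):
    """NDCG of the top-k prefix of `ranking` under relevance labels `rel`."""
    dcg = sum(rel.get(item, 0.0) / math.log2(pos + 2)
              for pos, item in enumerate(ranking[:k]))
    ideal = sorted(rel.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(pos + 2) for pos, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def counterfactual_item_rewards(ranking, rel, k=5):
    """Per-item reward: average metric drop when the item at position t is
    swapped with each candidate still remaining in the pool, shrunk toward
    zero when swap outcomes disagree (a crude stand-in for the paper's
    critic-guided, uncertainty-aware scaling)."""
    base = ndcg_at_k(ranking, rel, k)
    rewards = []
    for t in range(len(ranking)):
        deltas = []
        for j in range(t + 1, len(ranking)):   # counterfactual swaps
            swapped = list(ranking)
            swapped[t], swapped[j] = swapped[j], swapped[t]
            deltas.append(base - ndcg_at_k(swapped, rel, k))
        if not deltas:                          # last position: pool is empty
            rewards.append(0.0)
            continue
        mean = sum(deltas) / len(deltas)
        var = sum((d - mean) ** 2 for d in deltas) / len(deltas)
        rewards.append(mean / (1.0 + var))      # down-weight noisy rewards
    return rewards
```

Placing the one relevant item first yields a positive reward at that position (every swap hurts the metric), while positions whose swaps leave the metric unchanged receive zero, which is exactly the fine-grained credit assignment a single sequence-level reward cannot provide.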
Source: arXiv: 2603.11901