📄 Abstract - UserRL: Training Interactive User-Centric Agent via Reinforcement Learning

Reinforcement learning (RL) has shown promise in training agentic models that move beyond static benchmarks to engage in dynamic, multi-turn interactions. Yet, the ultimate value of such agents lies in their ability to assist users, a setting where the diversity and dynamics of user interaction pose challenges. In this work, we propose UserRL, a unified framework for training and evaluating user-centric abilities through standardized gym environments paired with simulated users. We systematically vary turn-level reward assignment and trajectory-level score calculation to analyze how different formulations affect learning under the GRPO algorithm. Our experiments across Qwen3 models reveal three key findings: (i) SFT cold start is critical for unlocking initial interaction ability and enabling sustained RL improvements; (ii) deliberate trajectory scoring yields more efficient and effective multi-turn interactions; and (iii) while stronger simulated users (e.g., GPT-4o) facilitate training, open-source simulators (e.g., Qwen3-32B) remain a cost-effective and transferable option. Together, these results highlight that careful design of reward shaping and user simulation choice is as crucial as model scale, and establish UserRL as a practical pathway for developing robust user-centric agentic models. All code and data are publicly available for future research.
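The abstract's central design knob is how per-turn rewards are aggregated into a trajectory-level score before GRPO's group-relative advantage normalization. Below is a minimal sketch of that pipeline, assuming an equal-weight aggregation and made-up reward values; the function names and example numbers are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def trajectory_score(turn_rewards, weights=None):
    """Aggregate per-turn rewards into one trajectory-level score.

    `weights` could emphasize, e.g., early task completion; by default
    this is a plain equal-weight sum (one possible formulation).
    """
    turn_rewards = np.asarray(turn_rewards, dtype=float)
    if weights is None:
        weights = np.ones_like(turn_rewards)
    return float(np.dot(weights, turn_rewards))

def grpo_advantages(group_scores, eps=1e-8):
    """GRPO-style group-relative advantages: normalize each sampled
    trajectory's score against the mean/std of its rollout group."""
    scores = np.asarray(group_scores, dtype=float)
    return (scores - scores.mean()) / (scores.std() + eps)

# Example: a group of 4 rollouts for the same user task, each with
# hypothetical turn-level rewards from a simulated-user environment.
group = [
    [0.0, 0.2, 1.0],   # solved on the third turn
    [0.0, 0.0, 0.0],   # never solved
    [1.0],             # solved immediately
    [0.0, 0.5, 0.5],
]
scores = [trajectory_score(t) for t in group]
print(grpo_advantages(scores))
```

Because the advantages are relative within each group, changing the aggregation (e.g., discounting late turns) reshapes which trajectories GRPO reinforces, which is why the paper treats trajectory scoring as a first-class design choice.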

Top-level tags: reinforcement learning, agents, model training
Detailed tags: user-centric agents, reward shaping, multi-turn interaction, simulated users, RL training

📄 Paper Summary

UserRL: Training Interactive User-Centric Agent via Reinforcement Learning


1️⃣ One-Sentence Summary

This paper proposes UserRL, a framework that trains AI assistants to interact more effectively with users via reinforcement learning in simulated-user environments, and finds that the design of the reward mechanism and the choice of user simulator are critical to improving interaction quality.

