RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System
1️⃣ One-sentence summary
This paper proposes RLAnything, a reinforcement learning framework that dynamically adjusts the environment, policy, and reward model through closed-loop optimization, substantially improving the performance of large language models and agents across a range of tasks.
We propose RLAnything, a reinforcement learning framework that dynamically forges environment, policy, and reward models through closed-loop optimization, amplifying learning signals and strengthening the overall RL system for any LLM or agentic scenario. Specifically, the policy is trained with integrated feedback from step-wise and outcome signals, while the reward model is jointly optimized via consistency feedback, which in turn further improves policy training. Moreover, our theory-motivated automatic environment adaptation improves training for both the reward and policy models by leveraging critic feedback from each, enabling learning from experience. Empirically, each added component consistently improves the overall system, and RLAnything yields substantial gains across various representative LLM and agentic tasks, boosting Qwen3-VL-8B-Thinking by 9.1% on OSWorld and Qwen2.5-7B-Instruct by 18.7% and 11.9% on AlfWorld and LiveBench, respectively. We also show that optimized reward-model signals outperform outcomes that rely on human labels. Code: this https URL
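The abstract describes a three-way closed loop among policy, reward model, and environment, but gives no implementation detail. Below is a minimal, hypothetical Python sketch of how such a loop could be wired together; every name (`environment.rollout`, `reward_model.score`, `environment.adapt`, and so on) and the way the signals are combined are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a closed-loop RL system in the spirit of RLAnything.
# All object interfaces and update rules below are illustrative assumptions;
# the paper's actual algorithm may differ substantially.

def closed_loop_training(policy, reward_model, environment, num_iterations=1000):
    """Jointly refine policy, reward model, and environment (duck-typed objects)."""
    for _ in range(num_iterations):
        # 1. Roll out the current policy in the (current) environment.
        trajectory = environment.rollout(policy)          # list of (state, action) steps
        outcome = environment.outcome_signal(trajectory)  # sparse task-level result

        # 2. Policy update: combine step-wise reward-model scores with the outcome signal.
        step_rewards = [reward_model.score(state, action) for state, action in trajectory]
        policy.update(trajectory, step_rewards, outcome)

        # 3. Reward-model update via consistency feedback: step-wise scores
        #    should agree with the observed outcome of the full trajectory.
        reward_model.update_consistency(trajectory, step_rewards, outcome)

        # 4. Environment adaptation: use critic feedback from both models to
        #    adjust the task distribution / environment configuration.
        policy_feedback = policy.critic_feedback(trajectory)
        rm_feedback = reward_model.critic_feedback(trajectory)
        environment.adapt(policy_feedback, rm_feedback)
```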
Source: arXiv: 2602.02488