📄 Paper Summary
Scaling Agent Learning via Experience Synthesis
1️⃣ One-Sentence Summary
This paper proposes DreamGym, a framework that efficiently trains reinforcement-learning agents by synthesizing diverse virtual experience data, overcoming the high interaction cost and narrow task coverage that limit traditional real-environment training, and significantly improving both training effectiveness and downstream real-world performance across a range of evaluations.
2️⃣ Abstract
While reinforcement learning (RL) can empower autonomous agents by enabling self-improvement through interaction, its practical adoption remains challenging due to costly rollouts, limited task diversity, unreliable reward signals, and infrastructure complexity, all of which obstruct the collection of scalable experience data. To address these challenges, we introduce DreamGym, the first unified framework designed to synthesize diverse experiences with scalability in mind to enable effective online RL training for autonomous agents. Rather than relying on expensive real-environment rollouts, DreamGym distills environment dynamics into a reasoning-based experience model that derives consistent state transitions and feedback signals through step-by-step reasoning, enabling scalable agent rollout collection for RL. To improve the stability and quality of transitions, DreamGym leverages an experience replay buffer initialized with offline real-world data and continuously enriched with fresh interactions to actively support agent training. To improve knowledge acquisition, DreamGym adaptively generates new tasks that challenge the current agent policy, enabling more effective online curriculum learning. Experiments across diverse environments and agent backbones demonstrate that DreamGym substantially improves RL training, both in fully synthetic settings and in sim-to-real transfer scenarios. On non-RL-ready tasks like WebArena, DreamGym outperforms all baselines by over 30%. In RL-ready but costly settings, it matches GRPO and PPO performance using only synthetic interactions. When transferring a policy trained purely on synthetic experiences to real-environment RL, DreamGym yields significant additional performance gains while requiring far fewer real-world interactions, providing a scalable warm-start strategy for general-purpose RL.
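To make the abstract's three components concrete (reasoning-based experience model, replay buffer seeded with offline data, adaptive task generation), here is a minimal Python sketch of how such a synthetic-experience training loop might be wired together. This is an assumption-laden illustration, not DreamGym's actual API: every class and function name below (`ExperienceModel`, `ReplayBuffer`, `propose_task`, `train_step`, `policy.act`/`policy.update`) is hypothetical, and the LLM call and reward values are placeholders.

```python
# A minimal, hypothetical sketch of a synthetic-experience RL loop in the
# spirit of DreamGym. Names and signatures are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple
import random


@dataclass
class Transition:
    task: str
    state: str
    action: str
    next_state: str
    reward: float


@dataclass
class ReplayBuffer:
    """Seeded with offline real-world trajectories, then continuously
    enriched with fresh synthetic interactions (as described in the abstract)."""
    transitions: List[Transition] = field(default_factory=list)

    def add(self, t: Transition) -> None:
        self.transitions.append(t)

    def sample(self, k: int) -> List[Transition]:
        return random.sample(self.transitions, min(k, len(self.transitions)))


class ExperienceModel:
    """Stand-in for the reasoning-based experience model: given a task, the
    current state, and an action, it derives the next state and a feedback
    signal via step-by-step reasoning (here, a stubbed LLM call)."""

    def __init__(self, llm: Callable[[str], str]):
        self.llm = llm

    def step(self, task: str, state: str, action: str,
             context: List[Transition]) -> Tuple[str, float]:
        prompt = (f"Task: {task}\nState: {state}\nAction: {action}\n"
                  f"Similar past transitions: {len(context)}\n"
                  "Reason step by step, then output the next state and reward.")
        _ = self.llm(prompt)                 # placeholder LLM call
        return "synthetic-next-state", 0.0   # placeholder outputs


def propose_task(policy, seed_tasks: List[str]) -> str:
    """Stub for adaptive task generation: in the paper, new tasks are created
    to challenge the current policy (online curriculum learning). Here we
    simply pick a seed task at random."""
    return random.choice(seed_tasks)


def train_step(policy, experience_model: ExperienceModel,
               buffer: ReplayBuffer, seed_tasks: List[str],
               horizon: int = 8) -> None:
    """One synthetic rollout followed by a policy update (schematic).
    `policy` is assumed to expose act(task, state) and update(batch)."""
    task = propose_task(policy, seed_tasks)
    state = f"initial observation for: {task}"
    for _ in range(horizon):
        action = policy.act(task, state)
        next_state, reward = experience_model.step(
            task, state, action, context=buffer.sample(4))
        buffer.add(Transition(task, state, action, next_state, reward))
        state = next_state
    policy.update(buffer.sample(32))         # e.g. a PPO/GRPO-style update
```

The key design point the abstract emphasizes is that the environment is replaced by a reasoning model: rollouts never touch the real environment, so the same loop can scale to non-RL-ready tasks such as WebArena, and the resulting policy can later be used to warm-start real-environment RL with far fewer real interactions.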