GoRL: An Algorithm-Agnostic Framework for Online Reinforcement Learning with Generative Policies
1️⃣ One-Sentence Summary
This paper proposes a new framework called GoRL that resolves the inherent tension between policy stability and expressiveness in reinforcement learning by decoupling policy optimization from action generation, achieving stronger and more stable performance on complex control tasks than both conventional Gaussian policies and existing generative policies.
Reinforcement learning (RL) faces a persistent tension: policies that are stable to optimize are often too simple to represent the multimodal action distributions needed for complex control. Gaussian policies provide tractable likelihoods and smooth gradients, but their unimodal form limits expressiveness. Conversely, generative policies based on diffusion or flow matching can model rich multimodal behaviors; however, in online RL, they are frequently unstable due to intractable likelihoods and noisy gradients propagating through deep sampling chains. We address this tension with a key structural principle: decoupling optimization from generation. Building on this insight, we introduce GoRL (Generative Online Reinforcement Learning), a framework that optimizes a tractable latent policy while utilizing a conditional generative decoder to synthesize actions. A two-timescale update schedule enables the latent policy to learn stably while the decoder steadily increases expressiveness, without requiring tractable action likelihoods. Across a range of continuous-control tasks, GoRL consistently outperforms both Gaussian policies and recent generative-policy baselines. Notably, on the HopperStand task, it reaches a normalized return above 870, more than 3 times that of the strongest baseline. These results demonstrate that separating optimization from generation provides a practical path to policies that are both stable and highly expressive.
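To make the structural idea concrete, below is a minimal, hypothetical PyTorch sketch of "decoupling optimization from generation": a tractable Gaussian policy acts in a latent space, a conditional decoder maps (state, latent) to an action, and the two-timescale schedule is approximated by giving the decoder a slower learning rate. All module names, dimensions, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not taken from the paper).
STATE_DIM, LATENT_DIM, ACTION_DIM = 17, 8, 6


class LatentGaussianPolicy(nn.Module):
    """Tractable latent policy: outputs a Gaussian over latent codes."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.Tanh(),
            nn.Linear(256, 2 * LATENT_DIM),
        )

    def forward(self, state):
        mean, log_std = self.net(state).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())


class ConditionalDecoder(nn.Module):
    """Generative decoder: synthesizes an action from (state, latent)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),
        )

    def forward(self, state, latent):
        return self.net(torch.cat([state, latent], dim=-1))


policy, decoder = LatentGaussianPolicy(), ConditionalDecoder()

# Assumed two-timescale schedule: the latent policy is updated on a faster
# timescale (larger learning rate) than the decoder, so optimization stays
# stable while the decoder's expressiveness grows gradually.
policy_opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
decoder_opt = torch.optim.Adam(decoder.parameters(), lr=3e-5)


def act(state):
    """Sample a latent from the tractable policy, then decode it to an action.

    The log-probability is computed in latent space, so no tractable action
    likelihood is required from the generative decoder.
    """
    dist = policy(state)
    z = dist.rsample()
    return decoder(state, z), dist.log_prob(z).sum(-1)


if __name__ == "__main__":
    s = torch.randn(4, STATE_DIM)
    a, logp = act(s)
    print(a.shape, logp.shape)  # torch.Size([4, 6]) torch.Size([4])
```

The key design point this sketch tries to capture is that any likelihood-based RL update (policy gradient, SAC-style entropy terms, etc.) touches only the latent Gaussian, while the decoder is free to be an arbitrarily expressive generative model trained on its own, slower schedule.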