📄 Paper Summary
Reinforcing Action Policies by Prophesying
1️⃣ One-Sentence Summary
This paper proposes a method called ProphRL, which builds a video model that predicts the outcomes of actions and pairs it with dedicated reinforcement learning techniques, substantially improving the adaptability and success rate of vision-language-action policies without relying on expensive real-robot experiments.
Vision-Language-Action (VLA) policies excel at aligning language, perception, and robot control. However, most VLAs are trained purely by imitation, which overfits to demonstrations and is brittle under distribution shift. Reinforcement learning (RL) directly optimizes task reward and thus addresses this misalignment, but real-robot interaction is expensive and conventional simulators are hard to engineer and transfer. We address both data efficiency and optimization stability in VLA post-training via a learned world model and an RL procedure tailored to flow-based action heads. Specifically, we introduce Prophet, a unified action-to-video robot actuation model pretrained across large-scale, heterogeneous robot data to learn reusable action-outcome dynamics. It can few-shot adapt to new robots, objects, and environments, yielding a rollout-ready simulator. On top of Prophet, we reinforce action policies with Flow-action-GRPO (FA-GRPO), which adapts Flow-GRPO to operate on VLA actions, and with FlowScale, a stepwise reweighting that rescales per-step gradients in the flow head. Together, Prophet, FA-GRPO, and FlowScale constitute ProphRL, a practical, data- and compute-efficient path to VLA post-training. Experiments show 5-17% success-rate gains on public benchmarks and 24-30% gains on real robots across different VLA variants.
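To make the FA-GRPO and FlowScale description above concrete, here is a minimal PyTorch sketch (not the authors' code): it computes GRPO-style group-relative advantages over several world-model rollouts of the same task prompt, applies a clipped surrogate loss over the flow head's denoising steps, and uses a hypothetical `step_weights` vector to stand in for FlowScale's per-step gradient rescaling. Function names, tensor shapes, the clip threshold, and the weighting schedule are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of FA-GRPO-style training with a FlowScale-like step reweighting.
# Rewards, shapes, and the weight schedule below are illustrative assumptions.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (G,) episode returns for G rollouts of the same task prompt."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def fa_grpo_loss(new_logp: torch.Tensor,      # (G, T) log-prob of each flow/denoising step, current policy
                 old_logp: torch.Tensor,      # (G, T) log-prob under the rollout (old) policy
                 advantages: torch.Tensor,    # (G,)  group-relative advantages
                 step_weights: torch.Tensor,  # (T,)  FlowScale-style per-step rescaling (assumed)
                 clip_eps: float = 0.2) -> torch.Tensor:
    ratio = (new_logp - old_logp).exp()                    # importance ratio per flow step
    adv = advantages[:, None]                              # broadcast advantages over steps
    unclipped = ratio * adv
    clipped = ratio.clamp(1 - clip_eps, 1 + clip_eps) * adv
    per_step = -torch.minimum(unclipped, clipped)          # PPO/GRPO-style clipped surrogate
    return (per_step * step_weights[None, :]).mean()       # reweight steps, then average

# Toy usage: 4 rollouts scored by the learned world model, 8 flow steps per action chunk.
rewards = torch.tensor([1.0, 0.0, 1.0, 0.0])
adv = group_relative_advantages(rewards)
new_logp = torch.randn(4, 8, requires_grad=True)
old_logp = new_logp.detach() + 0.01 * torch.randn(4, 8)
weights = torch.linspace(1.0, 0.5, 8)  # assumed schedule; the paper's actual weights may differ
loss = fa_grpo_loss(new_logp, old_logp, adv, weights)
loss.backward()
```

Unlike token-level GRPO on language models, each "step" here is a denoising/flow step of the action head, so the per-step weights control how much early versus late flow steps contribute to the policy gradient.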