📄 Abstract - WMPO: World Model-based Policy Optimization for Vision-Language-Action Models

Vision-Language-Action (VLA) models have shown strong potential for general-purpose robotic manipulation, but their reliance on expert demonstrations limits their ability to learn from failures and perform self-corrections. Reinforcement learning (RL) addresses these limitations through self-improving interaction with the physical environment, but suffers from high sample complexity on real robots. We introduce World-Model-based Policy Optimization (WMPO), a principled framework for on-policy VLA RL without interacting with the real environment. In contrast to widely used latent world models, WMPO focuses on pixel-based predictions that align the "imagined" trajectories with the VLA features pretrained on web-scale images. Crucially, WMPO enables the policy to perform on-policy GRPO, which yields stronger performance than commonly used off-policy methods. Extensive experiments in both simulation and real-robot settings demonstrate that WMPO (i) substantially improves sample efficiency, (ii) achieves stronger overall performance, (iii) exhibits emergent behaviors such as self-correction, and (iv) demonstrates robust generalization and lifelong learning capabilities.
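
The abstract describes running on-policy GRPO on trajectories "imagined" by a pixel-based world model rather than collected on a real robot. The sketch below illustrates that idea only under stated assumptions: PixelWorldModel, VLAPolicy, the discrete action space, and all hyperparameters are illustrative placeholders, not the paper's actual architecture or implementation.

```python
# Minimal sketch: on-policy GRPO over trajectories rolled out inside a
# learned pixel-based world model. All module names and numbers are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class PixelWorldModel(nn.Module):
    """Stand-in world model: predicts the next (pixel-feature) observation
    and a scalar reward from the current observation and a discrete action."""
    def __init__(self, img_dim=64, n_actions=8):
        super().__init__()
        self.dynamics = nn.Linear(img_dim + n_actions, img_dim)
        self.reward_head = nn.Linear(img_dim, 1)
        self.n_actions = n_actions

    def step(self, img, action):
        a = nn.functional.one_hot(action, self.n_actions).float()
        next_img = torch.tanh(self.dynamics(torch.cat([img, a], dim=-1)))
        reward = self.reward_head(next_img).squeeze(-1)
        return next_img, reward

class VLAPolicy(nn.Module):
    """Stand-in VLA policy head producing a categorical action distribution."""
    def __init__(self, img_dim=64, n_actions=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))

    def forward(self, img):
        return torch.distributions.Categorical(logits=self.net(img))

def grpo_update(policy, world_model, init_img, optimizer,
                group_size=8, horizon=10, clip_eps=0.2):
    """One on-policy GRPO step: roll out a group of imagined trajectories for
    the same task, compute group-relative advantages, and take a clipped
    policy-gradient step."""
    returns, log_probs = [], []
    for _ in range(group_size):
        img, total_r, traj_logp = init_img.clone(), 0.0, 0.0
        for _ in range(horizon):
            dist = policy(img)
            action = dist.sample()
            traj_logp = traj_logp + dist.log_prob(action)
            with torch.no_grad():                      # "imagined" transition
                img, r = world_model.step(img, action)
            total_r = total_r + r
        returns.append(total_r)
        log_probs.append(traj_logp)
    returns, log_probs = torch.stack(returns), torch.stack(log_probs)
    # Group-relative advantage: normalize returns within the sampled group.
    adv = (returns - returns.mean()) / (returns.std() + 1e-6)
    # Clipped surrogate; with fresh on-policy samples the ratio starts at 1,
    # so its gradient reduces to the group-baselined policy gradient.
    ratio = torch.exp(log_probs - log_probs.detach())
    loss = -torch.min(ratio * adv,
                      torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: one imagined-rollout GRPO update from a fixed initial observation.
wm, pi = PixelWorldModel(), VLAPolicy()
opt = torch.optim.Adam(pi.parameters(), lr=3e-4)
grpo_update(pi, wm, init_img=torch.zeros(64), optimizer=opt)
```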

Top-level tags: robotics, multi-modal, reinforcement learning
Detailed tags: world models, vision-language-action, policy optimization, sample efficiency, robot manipulation

📄 Paper Summary

WMPO: World Model-based Policy Optimization for Vision-Language-Action Models


1️⃣ One-Sentence Summary

This work proposes a new method called WMPO that lets a robot improve its own actions by learning through internal simulation, without repeated trial and error in the real environment, allowing it to master complex manipulation skills more efficiently and to self-correct.

