Towards Practical World Model-based Reinforcement Learning for Vision-Language-Action Models
1️⃣ One-sentence summary
This paper proposes a practical framework called VLA-MBPO, which significantly improves the performance and sample efficiency of vision-language-action robot policies trained with reinforcement learning by adapting unified multimodal models for data-efficient world modeling, enforcing multi-view consistency, and mitigating compounding errors, while avoiding the high cost and safety risks of real-world interaction.
Vision-Language-Action (VLA) models show strong generalization for robotic control, but finetuning them with reinforcement learning (RL) is constrained by the high cost and safety risks of real-world interaction. Training VLA models in interactive world models avoids these issues but introduces several challenges, including pixel-level world modeling, multi-view consistency, and compounding errors under sparse rewards. Building on recent advances across large multimodal models and model-based RL, we propose VLA-MBPO, a practical framework to tackle these problems in VLA finetuning. Our approach has three key design choices: (i) adapting unified multimodal models (UMMs) for data-efficient world modeling; (ii) an interleaved view decoding mechanism to enforce multi-view consistency; and (iii) chunk-level branched rollout to mitigate error compounding. Theoretical analysis and experiments across simulation and real-world tasks demonstrate that VLA-MBPO significantly improves policy performance and sample efficiency, underscoring its robustness and scalability for real-world robotic deployment.
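The "chunk-level branched rollout" idea builds on MBPO-style model-based RL: short imagined trajectories branch off states sampled from the real-data buffer, so world-model errors cannot compound over long horizons. The sketch below is a minimal toy illustration of that general technique, not the paper's implementation; `world_model_step`, `policy_chunk`, and the scalar state space are all hypothetical stand-ins.

```python
import random

def world_model_step(state, action):
    # Hypothetical learned dynamics model; a toy additive transition here.
    return state + action

def policy_chunk(state, chunk_len=4):
    # Hypothetical policy emitting an action chunk, as VLA policies often do.
    return [random.uniform(-1.0, 1.0) for _ in range(chunk_len)]

def branched_rollouts(real_buffer, num_branches=8, chunk_len=4):
    """Branch short imagined rollouts from real states (MBPO-style).

    Each branch starts at a state drawn from real experience and is rolled
    out for only one action chunk, bounding compounding model error.
    """
    synthetic = []
    for _ in range(num_branches):
        state = random.choice(real_buffer)          # anchor on real data
        for action in policy_chunk(state, chunk_len):
            next_state = world_model_step(state, action)
            synthetic.append((state, action, next_state))
            state = next_state                      # continue the short branch
    return synthetic

# States previously collected from the real environment (toy values).
real_buffer = [0.0, 1.0, 2.0]
data = branched_rollouts(real_buffer)
print(len(data))  # 8 branches x 4-step chunks = 32 synthetic transitions
```

The key design choice this illustrates: because every branch restarts from a real state and runs for only a short chunk, the synthetic data stays close to the true state distribution even when the world model is imperfect.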
Source: arXiv: 2603.20607