arXiv submission date: 2026-02-25
📄 Abstract - World Guidance: World Modeling in Condition Space for Action Generation

Leveraging future observation modeling to facilitate action generation is a promising avenue for enhancing the capabilities of Vision-Language-Action (VLA) models. However, existing approaches struggle to balance two goals: maintaining efficient, predictable future representations while preserving enough fine-grained information to guide precise action generation. To address this limitation, we propose WoG (World Guidance), a framework that maps future observations into compact conditions and injects them into the action inference pipeline. The VLA is then trained to predict these compressed conditions alongside future actions, thereby achieving effective world modeling within the condition space for action inference. We demonstrate that modeling and predicting this condition space not only facilitates fine-grained action generation but also exhibits superior generalization. Moreover, it learns effectively from large collections of human manipulation videos. Extensive experiments across both simulation and real-world environments validate that our method significantly outperforms existing methods based on future prediction. Project page is available at: this https URL
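The abstract describes a joint objective: compress a future observation into a compact condition, then train the policy to predict both that condition and the actions. A minimal sketch of that idea is below; all names (`compress_observation`, `wog_loss`, the linear projection, the `alpha` weight) are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a WoG-style training objective (names and the
# linear "encoder" are illustrative assumptions, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def compress_observation(future_obs, proj):
    """Map a high-dimensional future observation to a compact condition vector."""
    return future_obs @ proj  # linear compression standing in for a learned encoder

def wog_loss(pred_actions, true_actions, pred_condition, condition, alpha=0.5):
    """Joint objective: action regression plus a condition-prediction auxiliary term."""
    action_loss = np.mean((pred_actions - true_actions) ** 2)
    condition_loss = np.mean((pred_condition - condition) ** 2)
    return action_loss + alpha * condition_loss

# Toy dimensions: a 128-d future observation compressed to an 8-d condition.
future_obs = rng.normal(size=128)
proj = rng.normal(size=(128, 8)) / np.sqrt(128)
condition = compress_observation(future_obs, proj)

true_actions = rng.normal(size=7)   # e.g. a 7-DoF action target
pred_actions = true_actions + 0.1   # imperfect policy output (toy)
pred_condition = condition + 0.1    # imperfect condition prediction (toy)

loss = wog_loss(pred_actions, true_actions, pred_condition, condition)
print(round(float(loss), 4))  # → 0.015
```

The key point the sketch captures: the supervision target for world modeling is the compact condition, not the raw future frame, so the auxiliary loss stays low-dimensional and predictable while still carrying information about the future.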

Top-level tags: robotics, multi-modal, model training
Detailed tags: world modeling, vision-language-action, action generation, future prediction, human manipulation

World Guidance: World Modeling in Condition Space for Action Generation


1️⃣ One-sentence summary

This paper proposes a new framework called World Guidance (WoG), which compresses predicted future scenes into compact "conditions" that more effectively guide the model toward precise action generation, achieving better results on robot and embodied-agent control tasks than directly predicting the future.

Source: arXiv 2602.22010