arXiv submission date: 2026-03-16
📄 Abstract - Bridging Scene Generation and Planning: Driving with World Model via Unifying Vision and Motion Representation

End-to-end autonomous driving aims to generate safe and plausible planning policies from raw sensor input. Driving world models have shown great potential in learning rich representations by predicting the future evolution of a driving scene. However, existing driving world models primarily focus on visual scene representation, and motion representation is not explicitly designed to be planner-shared and inheritable, leaving a schism between the optimization of visual scene generation and the requirements of precise motion planning. We present WorldDrive, a holistic framework that couples scene generation and real-time planning via unifying vision and motion representation. We first introduce a Trajectory-aware Driving World Model, which conditions on a trajectory vocabulary to enforce consistency between visual dynamics and motion intentions, enabling the generation of diverse and plausible future scenes conditioned on a specific trajectory. We transfer the vision and motion encoders to a downstream Multi-modal Planner, ensuring the driving policy operates on mature representations pre-optimized by scene generation. A simple interaction between motion representation, visual representation, and ego status can generate high-quality, multi-modal trajectories. Furthermore, to exploit the world model's foresight, we propose a Future-aware Rewarder, which distills future latent representation from the frozen world model to evaluate and select optimal trajectories in real-time. Extensive experiments on the NAVSIM, NAVSIM-v2, and nuScenes benchmarks demonstrate that WorldDrive achieves leading planning performance among vision-only methods while maintaining high-fidelity action-controlled video generation capabilities, providing strong evidence for the effectiveness of unifying vision and motion representation for robust autonomous driving.
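The Future-aware Rewarder described in the abstract evaluates candidate trajectories against a future latent distilled from the frozen world model. A minimal sketch of that candidate-scoring idea is below; all names, shapes, the trajectory encoder, and the cosine-similarity scoring rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K candidate trajectories from a vocabulary,
# each with T future (x, y) waypoints; D-dimensional latent space.
K, T, D = 8, 6, 16
vocab = rng.normal(size=(K, T, 2))   # trajectory vocabulary (illustrative)
proj = rng.normal(size=(T * 2, D))   # toy trajectory-encoder weights

def frozen_world_model(obs_latent):
    """Stand-in for the frozen world model: maps the current
    observation latent to a predicted future latent."""
    return np.tanh(obs_latent[:D])

def reward(traj, future_latent):
    """Toy rewarder: cosine similarity between a trajectory
    embedding and the world model's distilled future latent."""
    emb = np.tanh(traj.reshape(-1) @ proj)
    denom = np.linalg.norm(emb) * np.linalg.norm(future_latent) + 1e-8
    return float(emb @ future_latent / denom)

obs_latent = rng.normal(size=32)
future_latent = frozen_world_model(obs_latent)
scores = [reward(t, future_latent) for t in vocab]
best = int(np.argmax(scores))        # index of the selected trajectory
```

Because the world model is frozen at planning time, this selection step is a cheap forward pass, which is what makes real-time trajectory evaluation plausible.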

Top-level tags: robotics, computer vision, agents
Detailed tags: autonomous driving, world model, motion planning, scene generation, end-to-end learning

Bridging Scene Generation and Planning: Driving with World Model via Unifying Vision and Motion Representation


1️⃣ One-sentence summary

This paper proposes WorldDrive, an autonomous driving framework that unifies vision and motion representation to tightly couple a future-scene generation model with real-time motion planning, achieving leading vision-only planning performance while preserving high-quality video generation.

Source: arXiv 2603.14948