Efficient and Explainable End-to-End Autonomous Driving via Masked Vision-Language-Action Diffusion
1️⃣ One-Sentence Summary
This paper proposes a new framework called MVLAD-AD, which uses a novel masked diffusion model to unify visual and linguistic understanding of driving scenes with precise trajectory planning, delivering a clear, explainable decision-making process while maintaining efficient inference and action precision.
Large Language Models (LLMs) and Vision-Language Models (VLMs) have emerged as promising candidates for end-to-end autonomous driving. However, these models typically face challenges in inference latency, action precision, and explainability. Existing autoregressive approaches struggle with slow token-by-token generation, while prior diffusion-based planners often rely on verbose, general-purpose language tokens that lack explicit geometric structure. In this work, we propose Masked Vision-Language-Action Diffusion for Autonomous Driving (MVLAD-AD), a novel framework designed to bridge the gap between efficient planning and semantic explainability via a masked vision-language-action diffusion model. Unlike methods that force actions into the language space, we introduce a discrete action tokenization strategy that constructs a compact codebook of kinematically feasible waypoints from real-world driving distributions. Moreover, we propose geometry-aware embedding learning to ensure that embeddings in the latent space approximate physical geometric metrics. Finally, an action-priority decoding strategy is introduced to prioritize trajectory generation. Extensive experiments on nuScenes and derived benchmarks demonstrate that MVLAD-AD achieves superior efficiency and outperforms state-of-the-art autoregressive and diffusion baselines in planning precision, while providing high-fidelity and explainable reasoning.
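The abstract's discrete action tokenization strategy — building a compact codebook of feasible waypoints from real driving distributions, then mapping trajectories to codebook indices — can be illustrated with a minimal sketch. This is a hypothetical toy using k-means clustering over per-step waypoint offsets; the paper does not specify its codebook construction, so the function names, the k-means choice, and the toy data are all assumptions, not the authors' method.

```python
# Hypothetical sketch of discrete action tokenization: cluster (dx, dy)
# waypoint offsets from driving logs into a compact codebook, then map any
# trajectory to codebook indices ("action tokens"). The k-means procedure
# and all names here are illustrative assumptions, not the paper's method.
import numpy as np

def build_codebook(offsets: np.ndarray, k: int = 8, iters: int = 20,
                   seed: int = 0) -> np.ndarray:
    """Simple k-means over (N, 2) waypoint offsets -> (k, 2) codebook."""
    rng = np.random.default_rng(seed)
    centers = offsets[rng.choice(len(offsets), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each offset to its nearest center (Euclidean distance)
        dists = np.linalg.norm(offsets[:, None, :] - centers[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = offsets[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def tokenize(traj: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map a (T, 2) sequence of waypoint offsets to (T,) discrete token ids."""
    dists = np.linalg.norm(traj[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy "driving distribution": mostly straight steps, some left/right turns.
rng = np.random.default_rng(1)
straight = rng.normal([1.0, 0.0], 0.05, size=(200, 2))
left = rng.normal([0.8, 0.3], 0.05, size=(100, 2))
right = rng.normal([0.8, -0.3], 0.05, size=(100, 2))
codebook = build_codebook(np.vstack([straight, left, right]), k=3)
tokens = tokenize(np.array([[1.0, 0.0], [0.8, 0.3]]), codebook)
print(codebook.shape, tokens.shape)
```

A planner can then predict these discrete token ids instead of raw continuous coordinates, which is what makes masked-diffusion-style generation over actions possible without forcing them into the language vocabulary.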
Source: arXiv:2602.20577