
arXiv submission date: 2026-02-16
📄 Abstract - DriveFine: Refining-Augmented Masked Diffusion VLA for Precise and Robust Driving

Vision-Language-Action (VLA) models for autonomous driving increasingly adopt generative planners trained with imitation learning followed by reinforcement learning. Diffusion-based planners suffer from modality alignment difficulties, low training efficiency, and limited generalization, while token-based planners are plagued by cumulative causal errors and irreversible decoding. In short, the two dominant paradigms exhibit complementary strengths and weaknesses. In this paper, we propose DriveFine, a masked diffusion VLA model that combines flexible decoding with self-correction capabilities. In particular, we design a novel plug-and-play block-MoE, which seamlessly injects a refinement expert on top of the generation expert. By enabling explicit expert selection during inference and gradient blocking during training, the two experts are fully decoupled, preserving the foundational capabilities and generic patterns of the pretrained weights and highlighting the flexibility and extensibility of the block-MoE design. Furthermore, we design a hybrid reinforcement learning strategy that encourages effective exploration of the refinement expert while maintaining training stability. Extensive experiments on the NAVSIM v1, v2, and Navhard benchmarks demonstrate that DriveFine exhibits strong efficacy and robustness. The code will be released at this https URL.
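The block-MoE decoupling described in the abstract can be illustrated with a toy sketch. Everything below is hypothetical and only mirrors the stated idea: the scalar "experts", the `forward` helper, and the `stop_gradient` flag are stand-ins, not the paper's implementation. The key point is that during refinement training the generation expert's output is treated as a constant, so the pretrained generation weights receive no gradient.

```python
# Minimal pure-Python sketch (manual gradients, no autograd library) of
# decoupling two experts via explicit selection and gradient blocking.
# Each "expert" is a scalar multiply standing in for a network block.

def forward(x, w_gen, w_ref, expert="gen", stop_gradient=True):
    base = w_gen * x                    # generation expert (pretrained)
    if expert == "gen":                 # explicit expert selection
        return base, {"w_gen": x, "w_ref": 0.0}
    out = base + w_ref * base           # refinement expert on top
    if stop_gradient:
        # gradient blocking: `base` is treated as a constant, so the
        # generation expert's weight gets zero gradient during training
        grads = {"w_gen": 0.0, "w_ref": base}
    else:
        grads = {"w_gen": x * (1.0 + w_ref), "w_ref": base}
    return out, grads

out, g = forward(2.0, w_gen=3.0, w_ref=0.5, expert="ref")
assert g["w_gen"] == 0.0   # pretrained generation expert untouched
assert g["w_ref"] == 6.0   # only the refinement expert learns
```

In an autograd framework the same decoupling would typically be expressed by detaching the generation expert's output before the refinement pass; the manual-gradient version above just makes the zero gradient explicit.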

Top-level tags: agents, robotics, model training
Detailed tags: autonomous driving, vision-language-action, diffusion models, reinforcement learning, mixture of experts

DriveFine: Refining-Augmented Masked Diffusion VLA for Precise and Robust Driving


1️⃣ One-Sentence Summary

This paper proposes DriveFine, a new planning model for autonomous driving that combines the strengths of diffusion-based and token-based models. Through an innovative modular expert design, it can correct itself while generating driving actions, achieving more precise and robust driving performance across multiple benchmarks.

Source: arXiv:2602.14577