arXiv submission date: 2025-12-18
📄 Abstract - Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future

Autonomous driving has long relied on modular "Perception-Decision-Action" pipelines, where hand-crafted interfaces and rule-based components often break down in complex or long-tailed scenarios. Their cascaded design further propagates perception errors, degrading downstream planning and control. Vision-Action (VA) models address some limitations by learning direct mappings from visual inputs to actions, but they remain opaque, sensitive to distribution shifts, and lack structured reasoning or instruction-following capabilities. Recent progress in Large Language Models (LLMs) and multimodal learning has motivated the emergence of Vision-Language-Action (VLA) frameworks, which integrate perception with language-grounded decision making. By unifying visual understanding, linguistic reasoning, and actionable outputs, VLAs offer a pathway toward more interpretable, generalizable, and human-aligned driving policies. This work provides a structured characterization of the emerging VLA landscape for autonomous driving. We trace the evolution from early VA approaches to modern VLA frameworks and organize existing methods into two principal paradigms: End-to-End VLA, which integrates perception, reasoning, and planning within a single model, and Dual-System VLA, which separates slow deliberation (via VLMs) from fast, safety-critical execution (via planners). Within these paradigms, we further distinguish subclasses such as textual vs. numerical action generators and explicit vs. implicit guidance mechanisms. We also summarize representative datasets and benchmarks for evaluating VLA-based driving systems and highlight key challenges and open directions, including robustness, interpretability, and instruction fidelity. Overall, this work aims to establish a coherent foundation for advancing human-compatible autonomous driving systems.
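To make the paradigm distinction in the abstract more concrete, here is a minimal sketch of a Dual-System VLA control loop: a slow, VLM-style reasoner refreshes language-grounded guidance at low frequency, while a fast planner turns the latest guidance into an action every control cycle. This is an illustrative assumption only; the class and function names (Guidance, SlowReasoner, FastPlanner, drive_loop) are hypothetical and are not interfaces from the paper.

```python
# Illustrative Dual-System VLA sketch. All names and interfaces here are
# hypothetical placeholders, not the paper's implementation.
from dataclasses import dataclass


@dataclass
class Guidance:
    """High-level, language-grounded guidance produced by the slow system."""
    instruction: str         # e.g. "yield to the pedestrian, then turn left"
    target_speed_mps: float  # coarse numerical hint for the fast planner


class SlowReasoner:
    """Stands in for a VLM deliberating at low frequency (e.g. ~2 Hz)."""
    def deliberate(self, frames, mission: str) -> Guidance:
        # A real system would query a VLM here; we return a fixed placeholder.
        return Guidance(instruction=f"follow mission: {mission}", target_speed_mps=8.0)


class FastPlanner:
    """Stands in for a safety-critical planner running every cycle (e.g. ~20 Hz)."""
    def plan(self, frames, guidance: Guidance) -> dict:
        # A real planner would output a trajectory or control command;
        # here we simply echo the guidance as a trivial action.
        return {"accel": 0.5, "steer": 0.0, "speed_ref": guidance.target_speed_mps}


def drive_loop(n_steps: int = 100, slow_every: int = 10) -> dict:
    reasoner, planner = SlowReasoner(), FastPlanner()
    guidance = Guidance(instruction="start", target_speed_mps=0.0)
    action = {}
    for step in range(n_steps):
        frames = None  # placeholder for camera input
        if step % slow_every == 0:  # slow system: infrequent deliberation
            guidance = reasoner.deliberate(frames, mission="turn left at the next intersection")
        action = planner.plan(frames, guidance)  # fast system: every cycle
        # The action would be sent to the vehicle controller here.
    return action


if __name__ == "__main__":
    print(drive_loop())
```

The design point the sketch illustrates is the frequency split: expensive language-based deliberation is decoupled from the fast execution path so that safety-critical planning never waits on the VLM.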

Top-level tags: multi-modal agents, systems
Detailed tags: autonomous driving, vision-language-action, decision making, planning, benchmark

Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future


1️⃣ One-Sentence Summary

This paper systematically traces the evolution of autonomous driving from traditional modular pipelines to emerging Vision-Language-Action (VLA) models, arguing that by integrating visual perception, language-based reasoning, and action generation, VLA models open a path toward more interpretable, more generalizable, and more human-aligned autonomous driving systems.


Source: arXiv 2512.16760