
arXiv submission date: 2026-04-07
📄 Abstract - ICR-Drive: Instruction Counterfactual Robustness for End-to-End Language-Driven Autonomous Driving

Recent progress in vision-language-action (VLA) models has enabled language-conditioned driving agents to execute natural-language navigation commands in closed-loop simulation, yet standard evaluations largely assume instructions are precise and well-formed. In deployment, instructions vary in phrasing and specificity, may omit critical qualifiers, and can occasionally include misleading, authority-framed text, leaving instruction-level robustness under-measured. We introduce ICR-Drive, a diagnostic framework for instruction counterfactual robustness in end-to-end language-conditioned autonomous driving. ICR-Drive generates controlled instruction variants spanning four perturbation families: Paraphrase, Ambiguity, Noise, and Misleading, where Misleading variants conflict with the navigation goal and attempt to override intent. We replay identical CARLA routes under matched simulator configurations and seeds to isolate performance changes attributable to instruction language. Robustness is quantified using standard CARLA Leaderboard metrics and per-family performance degradation relative to the baseline instruction. Experiments on LMDrive and BEVDriver show that minor instruction changes can induce substantial performance drops and distinct failure modes, revealing a reliability gap for deploying embodied foundation models in safety-critical driving.
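The abstract's robustness measure, per-family performance degradation relative to the baseline instruction, can be sketched as below. This is a minimal illustration only: the function name, metric values, and score scale are assumptions, not the paper's implementation (the paper uses standard CARLA Leaderboard metrics).

```python
# Hedged sketch of per-family degradation scoring, as described in the
# abstract. All names and numbers here are illustrative assumptions.

def per_family_degradation(baseline_score, family_scores):
    """Relative drop in driving score for each perturbation family,
    measured against the baseline (well-formed) instruction."""
    return {
        family: (baseline_score - score) / baseline_score
        for family, score in family_scores.items()
    }

# Hypothetical CARLA Leaderboard-style Driving Scores in [0, 100].
baseline = 62.0
scores = {
    "Paraphrase": 55.8,
    "Ambiguity": 48.3,
    "Noise": 44.1,
    "Misleading": 21.7,
}
degradation = per_family_degradation(baseline, scores)
for family, drop in sorted(degradation.items(), key=lambda kv: kv[1]):
    print(f"{family:<11} -{drop:.1%}")
```

A larger degradation for the Misleading family would match the abstract's claim that intent-overriding instructions induce the most severe failures.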

Top-level tags: agents, natural language processing, robotics
Detailed tags: autonomous driving, vision-language-action, robustness evaluation, counterfactual analysis, instruction following

ICR-Drive: Instruction Counterfactual Robustness for End-to-End Language-Driven Autonomous Driving


1️⃣ One-sentence summary

This paper presents ICR-Drive, a diagnostic framework for testing and evaluating the robustness of language-driven autonomous driving systems under changes in instruction phrasing (paraphrased, ambiguous, noisy, or misleading instructions), revealing latent safety risks in current models.

Source: arXiv:2604.05378