arXiv submission date: 2026-05-11
📄 Abstract - GuardAD: Safeguarding Autonomous Driving MLLMs via Markovian Safety Logic

Multimodal large language models (MLLMs) are increasingly integrated into autonomous driving (AD) systems; however, they remain vulnerable to diverse safety threats, particularly in accident-prone scenarios. Recent safeguard mechanisms have shown promise by incorporating logical constraints, yet most rely on static formulations that lack temporally grounded safety reasoning over evolving traffic interactions, resulting in limited robustness in dynamic driving environments. To address these limitations, we propose GuardAD, a model-agnostic safeguard that formulates AD safety as an evolving Markovian logical state. GuardAD introduces Neuro-Symbolic Logic Formalization, which represents safety predicates over heterogeneous traffic participants and continuously induces them via n-th order Markovian Logic Induction. This design enables the inference of emerging and latent hazards beyond single-step observations. Rather than simply vetoing unsafe actions, GuardAD performs Logic-Driven Action Revision, where inferred safety states actively guide action refinement without modifying the underlying MLLM. Extensive experiments on multiple benchmarks and AD-MLLMs demonstrate that GuardAD substantially reduces accident rates (-32.07%) while slightly improving task performance (+6.85%). Moreover, closed-loop simulation evaluations, together with physical-world vehicle studies, further validate the effectiveness and potential of GuardAD.
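The abstract describes tracking safety as a Markovian logical state over recent observations and revising, rather than vetoing, unsafe actions. The sketch below illustrates that idea only in spirit: the predicates (`too_close`, `closing_fast`), thresholds, and revision rules are all hypothetical placeholders, not the paper's actual formalization.

```python
from collections import deque

# Hypothetical safety predicates over traffic participants; the paper's
# real predicate set and induction rules are not given in the abstract.
def too_close(ego, other, threshold=5.0):
    return abs(ego["pos"] - other["pos"]) < threshold

def closing_fast(history, other_id, limit=-2.0):
    # Latent hazard: distance to this participant shrinking quickly
    # across recent steps (visible only with temporal context).
    if len(history) < 2:
        return False
    prev, curr = history[-2], history[-1]
    return (curr[other_id] - prev[other_id]) < limit

class MarkovianSafetyState:
    """Toy n-th order Markovian safety-state tracker."""
    def __init__(self, order=3):
        self.history = deque(maxlen=order)  # last n distance snapshots

    def update(self, ego, others):
        snapshot = {oid: abs(ego["pos"] - o["pos"]) for oid, o in others.items()}
        self.history.append(snapshot)
        hazards = set()
        for oid, o in others.items():
            if too_close(ego, o):
                hazards.add(("UNSAFE_DISTANCE", oid))
            if closing_fast(self.history, oid):
                hazards.add(("CLOSING_FAST", oid))
        return hazards

def revise_action(action, hazards):
    # Logic-driven revision rather than a plain veto: refine the
    # proposed action according to the inferred safety state.
    if any(tag == "UNSAFE_DISTANCE" for tag, _ in hazards):
        return "brake"
    if any(tag == "CLOSING_FAST" for tag, _ in hazards):
        return "decelerate"
    return action
```

Note how the second hazard only fires once the history window holds multiple steps, which is the point of reasoning beyond single-step observations.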

Top-level tags: multi-modal autonomous driving llm
Detailed tags: safety reasoning markovian logic neuro-symbolic action revision

GuardAD: Safeguarding Autonomous Driving MLLMs via Markovian Safety Logic


1️⃣ One-sentence summary

This paper proposes GuardAD, a model-agnostic safeguard for autonomous driving that models the safety state of a traffic scene as Markovian logic evolving over time, dynamically infers latent hazards, and proactively revises driving decisions, substantially reducing accident rates without modifying the underlying model.

Source: arXiv 2605.10386