arXiv submission date: 2026-03-19
📄 Abstract - SAVeS: Steering Safety Judgments in Vision-Language Models via Semantic Cues

Vision-language models (VLMs) are increasingly deployed in real-world and embodied settings where safety decisions depend on visual context. However, it remains unclear which visual evidence drives these judgments. We study whether multimodal safety behavior in VLMs can be steered by simple semantic cues. We introduce a semantic steering framework that applies controlled textual, visual, and cognitive interventions without changing the underlying scene content. To evaluate these effects, we propose SAVeS, a benchmark for situational safety under semantic cues, together with an evaluation protocol that separates behavioral refusal, grounded safety reasoning, and false refusals. Experiments across multiple VLMs and an additional state-of-the-art benchmark show that safety decisions are highly sensitive to semantic cues, indicating reliance on learned visual-linguistic associations rather than grounded visual understanding. We further demonstrate that automated steering pipelines can exploit these mechanisms, highlighting a potential vulnerability in multimodal safety systems.

Top-level tags: multi-modal model evaluation, computer vision
Detailed tags: vision-language models, safety evaluation, semantic steering, benchmark, vulnerability analysis

SAVeS: Steering Safety Judgments in Vision-Language Models via Semantic Cues


1️⃣ One-sentence summary

This paper finds that the safety judgments of vision-language models depend heavily on simple semantic cues rather than on a deep understanding of the visual content, and it proposes an evaluation benchmark that reveals, and can be used to exploit, this latent systemic vulnerability.

Source: arXiv:2603.19092