Directional Embedding Smoothing for Robust Vision Language Models
1️⃣ One-sentence summary
This work extends a lightweight defense called RESTA to vision-language models: by injecting directionally aligned noise into embedding vectors at inference time, it effectively reduces the success rate of a wide range of multi-modal jailbreak attacks, thereby improving the safety and reliability of VLMs.
The safety and reliability of vision-language models (VLMs) are crucial to deploying trustworthy agentic AI systems. However, VLMs remain vulnerable to jailbreak attacks that undermine their safety alignment and elicit harmful outputs. In this work, we extend the Randomized Embedding Smoothing and Token Aggregation (RESTA) defense to VLMs and evaluate it against the JailBreakV-28K benchmark of multi-modal jailbreak attacks. We find that RESTA effectively reduces the attack success rate across this diverse corpus of attacks, particularly when employing directional embedding noise, in which the injected noise is aligned with the original token embedding vectors. Our results demonstrate that RESTA can help secure VLMs within agentic systems as a lightweight, inference-time defense layer in an overall security framework.
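To make the idea of directional embedding noise concrete, here is a minimal, hypothetical sketch in NumPy. It assumes a simplified setting (the function names, the scaling distribution, and the majority-vote aggregation are illustrative choices, not the paper's exact implementation): instead of isotropic Gaussian noise, each token embedding is perturbed along its own direction by a random scale factor, and the final decision is aggregated over several noisy forward passes.

```python
import numpy as np


def directional_smooth(embeddings, sigma=0.1, rng=None):
    """Inject noise aligned with each token embedding's own direction.

    Hypothetical sketch: each row (token embedding) is multiplied by a
    random scalar drawn from N(1, sigma^2), so the perturbation points
    along the embedding vector itself rather than in a random direction.
    """
    rng = rng or np.random.default_rng()
    scales = rng.normal(loc=1.0, scale=sigma, size=(embeddings.shape[0], 1))
    return embeddings * scales


def smoothed_decision(embeddings, classify, n_samples=10, sigma=0.1, seed=0):
    """Aggregate a classifier's outputs over several noisy copies.

    `classify` is a placeholder for the model head; the aggregation here
    is a simple majority vote over n_samples smoothed inputs.
    """
    rng = np.random.default_rng(seed)
    votes = [classify(directional_smooth(embeddings, sigma, rng))
             for _ in range(n_samples)]
    return max(set(votes), key=votes.count)
```

The key design choice is that the noise magnitude scales with each token's embedding norm, so high-norm tokens receive proportionally larger perturbations while the embedding's direction is (approximately) preserved.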
Source: arXiv: 2603.15259