arXiv submission date: 2026-03-31
📄 Abstract - Robust Multimodal Safety via Conditional Decoding

Multimodal large language models (MLLMs) often experience degraded safety alignment when harmful queries exploit cross-modal interactions: models aligned on text alone show a higher rate of successful attacks when extended to two or more modalities. In this work, we propose a simple conditional decoding strategy, CASA (Classification Augmented with Safety Attention), which uses the internal representations of MLLMs to predict a binary safety token before response generation. We introduce a novel safety attention module designed to enhance the model's ability to detect malicious queries. Our design ensures robust safety alignment without relying on any external classifier or auxiliary head, and without modality-specific safety fine-tuning. On diverse benchmarks such as MM-SafetyBench, JailbreakV-28k, and adversarial audio tests, CASA lowers the average attack success rate by more than 97% across modalities and attack types. Our empirical evaluations also show that CASA maintains strong utility on benign inputs, a result validated through both automated and human evaluations (via 13 trained annotators). Together, these results highlight CASA as a simple and generalizable framework for improving multimodal LLM safety.
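The core mechanism described above, predicting a binary safety token from the model's internal representations before any response tokens are decoded, can be illustrated with a minimal sketch. All names here (`safety_token`, `conditional_decode`, the scalar score standing in for a pooled hidden state) are illustrative assumptions, not the paper's actual API or the real safety-attention module:

```python
# Hedged sketch of conditional decoding with a safety token, assuming a
# scalar "unsafety score" stands in for the MLLM's pooled internal
# representation (the paper uses a safety attention module instead).

def safety_token(score: float, threshold: float = 0.0) -> str:
    """Map an internal-representation score to a binary safety token."""
    return "safe" if score <= threshold else "unsafe"

def conditional_decode(score: float, generate) -> str:
    """Emit the safety token first; decode a response only if safe."""
    if safety_token(score) == "unsafe":
        # Generation is gated on the predicted token, so no harmful
        # response tokens are ever sampled for a flagged query.
        return "[refusal] I can't help with that request."
    return generate()

# Benign query (low score) proceeds; adversarial query (high score) is blocked.
print(conditional_decode(-1.2, lambda: "Sure, here is the answer."))
print(conditional_decode(3.4, lambda: "Sure, here is the answer."))
```

The key design point is that the gate runs inside the decoding loop rather than as an external classifier, which is why no auxiliary head or modality-specific fine-tuning is needed.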

Top-level tags: multi-modal llm, model evaluation
Detailed tags: safety alignment, conditional decoding, adversarial robustness, multimodal attacks, internal representations

Robust Multimodal Safety via Conditional Decoding


1️⃣ One-sentence summary

This paper proposes CASA, a simple conditional decoding strategy that has a multimodal large model predict a safety token before generating its response. It effectively defends against malicious attacks that exploit cross-modal interactions, reducing the average attack success rate by more than 97% across a range of tests while leaving performance on benign tasks intact.

Source: arXiv 2604.00310