The Salami Slicing Threat: Exploiting Cumulative Risks in LLM Systems
1️⃣ One-Sentence Summary
This paper proposes a new multi-turn jailbreak method called the "Salami Slicing Attack", which chains a large number of seemingly harmless conversational turns to gradually accumulate malicious intent and bypass the safety guardrails of large language models. The attack achieves very high success rates on multiple mainstream models, and the paper also proposes a corresponding defense strategy.
Large Language Models (LLMs) face prominent security risks from jailbreaking, the practice of manipulating models to bypass built-in safety constraints and generate unethical or unsafe content. Among various jailbreak techniques, multi-turn jailbreak attacks are more covert and persistent than their single-turn counterparts, exposing critical vulnerabilities in LLMs. However, existing multi-turn jailbreak methods suffer from two fundamental limitations that restrict their practical impact in real-world scenarios: (a) as models become more context-aware, any explicit harmful trigger is increasingly likely to be flagged and blocked; and (b) successful final-step triggers often require finely tuned, model-specific contexts, making such attacks highly context-dependent. To fill this gap, we propose Salami Slicing Risk, which chains numerous low-risk inputs that individually evade alignment thresholds but cumulatively build up harmful intent, ultimately triggering high-risk behaviors without heavy reliance on pre-designed contextual structures. Building on this risk, we develop Salami Attack, an automatic framework universally applicable to multiple model types and modalities. Rigorous experiments demonstrate its state-of-the-art performance across diverse models and modalities, achieving over 90% Attack Success Rate on GPT-4o and Gemini, as well as robustness against real-world alignment defenses. We also propose a defense strategy that constrains the Salami Attack by at least 44.8% while achieving a maximum blocking rate of 64.8% against other multi-turn jailbreak attacks. Our findings provide critical insights into the pervasive risks of multi-turn jailbreaking and offer actionable mitigation strategies to enhance LLM security.
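The core idea of the cumulative risk, and of a defense against it, can be illustrated with a toy monitor. This is only a minimal sketch of the concept, not the paper's actual method: the per-turn risk scores, both thresholds, and the `monitor` function are all hypothetical assumptions made for illustration.

```python
# Toy cumulative-risk monitor for a multi-turn dialogue (illustrative only).
# Each turn's risk score is assumed to come from some alignment classifier;
# thresholds below are arbitrary values chosen for the example.

PER_TURN_THRESHOLD = 0.5    # assumed single-turn alignment threshold
CUMULATIVE_THRESHOLD = 2.0  # assumed risk budget for the whole conversation

def monitor(turn_scores):
    """Return the 1-based turn index at which the conversation is blocked,
    or None if every turn passes both the per-turn and cumulative checks."""
    cumulative = 0.0
    for i, score in enumerate(turn_scores, start=1):
        if score > PER_TURN_THRESHOLD:
            return i  # a single-turn filter would already fire here
        cumulative += score
        if cumulative > CUMULATIVE_THRESHOLD:
            return i  # each turn looks benign, but the budget is exhausted
    return None

# Every turn is individually low-risk (0.4 <= 0.5), so a purely per-turn
# filter never triggers; the cumulative monitor blocks at turn 6.
scores = [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
print(monitor(scores))  # → 6
```

A per-turn filter alone corresponds to the vulnerability the paper exploits; the cumulative check is the intuition behind bounding "salami-sliced" intent across a conversation.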
From arXiv: 2604.11309