arXiv submission date: 2026-04-30
📄 Abstract - Minimal, Local, Causal Explanations for Jailbreak Success in Large Language Models

Safety-trained large language models (LLMs) can often be induced to answer harmful requests through jailbreak prompts. Because we lack a robust understanding of why LLMs are susceptible to jailbreaks, future frontier models operating more autonomously in higher-stakes settings may similarly be vulnerable to such attacks. Prior work has studied jailbreak success by examining the model's intermediate representations, identifying directions in this space that causally encode concepts like harmfulness and refusal. Such work then globally explains all jailbreak attacks as attempts to reduce or strengthen these concepts (e.g., reduce harmfulness). However, different jailbreak strategies may succeed by strengthening or suppressing different intermediate concepts, and the same jailbreak strategy may not work for different harmful request categories (e.g., violence vs. cyberattack); thus, we seek to give a local explanation -- i.e., why did this specific jailbreak succeed? To address this gap, we introduce LOCA, a method that gives Local, CAusal explanations of jailbreak success by identifying a minimal set of interpretable, intermediate representation changes that causally induce model refusal on an otherwise successful jailbreak request. We evaluate LOCA on harmful original-jailbreak pairs from a large jailbreak benchmark across Gemma and Llama chat models, comparing against prior methods adapted to this setting. LOCA can successfully induce refusal by making, on average, six interpretable changes; prior work routinely fails to achieve refusal even after 20 changes. LOCA is a step toward mechanistic, local explanations of jailbreak success in LLMs. Code to be released.

Top-level tags: llm, model evaluation, machine learning
Detailed tags: jailbreak detection, causal explanation, safety, interpretability, refusal mechanisms

Minimal, Local, Causal Explanations for Jailbreak Success in Large Language Models


1️⃣ One-sentence summary

This paper proposes a new method, LOCA, which explains precisely why a specific jailbreak attack succeeds in bypassing safety restrictions by locating and making only a small number of modifications to key representation directions inside the model, providing a local, causal analysis tool for understanding the underlying mechanisms of different attack strategies.
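To make the idea of a causal intervention on intermediate representation directions concrete, here is a minimal sketch (not the paper's LOCA implementation, whose code has not yet been released). It adds a hypothetical "refusal" direction to the residual stream of one transformer layer and checks whether generation on a jailbreak prompt now refuses. The model id, layer index, direction vector (random here; in practice a learned concept direction), and strength `alpha` are all placeholder assumptions.

```python
# A minimal sketch, assuming a HuggingFace chat model; NOT the released LOCA code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"    # assumed model id
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 14                                   # hypothetical intervention layer
refusal_dir = torch.randn(model.config.hidden_size)
refusal_dir = refusal_dir / refusal_dir.norm()   # unit-norm concept direction (placeholder)
alpha = 4.0                                      # hypothetical intervention strength

def add_refusal_direction(module, inputs, output):
    # Decoder layers return a tuple; the residual-stream hidden states come first.
    hidden = output[0] + alpha * refusal_dir.to(output[0].dtype)
    return (hidden,) + output[1:]

# Hook one decoder layer so every forward pass is intervened on.
handle = model.model.layers[layer_idx].register_forward_hook(add_refusal_direction)

jailbreak_prompt = "..."  # an otherwise-successful jailbreak prompt (elided)
batch = tok(jailbreak_prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**batch, max_new_tokens=64)
handle.remove()

# If the intervention is causally sufficient, the completion should now refuse.
print(tok.decode(out[0], skip_special_tokens=True))
```

Per the abstract, LOCA differs from a single fixed intervention like this: it searches for a minimal set of interpretable representation changes (about six on average) whose combination causally flips a specific, otherwise-successful jailbreak request to a refusal.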

Source: arXiv:2605.00123