GateBreaker: Gate-Guided Attacks on Mixture-of-Expert LLMs
1️⃣ One-Sentence Summary
This paper is the first to propose GateBreaker, a training-free, lightweight attack that analyzes and precisely disables the small set of key neurons responsible for safety in Mixture-of-Experts large language models. By doing so, it effectively bypasses the safety alignment of multiple state-of-the-art models and induces them to produce harmful content, revealing a safety vulnerability unique to this architecture.
Mixture-of-Experts (MoE) architectures have advanced the scaling of Large Language Models (LLMs) by activating only a sparse subset of parameters per input, enabling state-of-the-art performance with reduced computational cost. As these models are increasingly deployed in critical domains, understanding and strengthening their alignment mechanisms is essential to prevent harmful outputs. However, existing LLM safety research has focused almost exclusively on dense architectures, leaving the unique safety properties of MoEs largely unexamined. The modular, sparsely-activated design of MoEs suggests that safety mechanisms may operate differently than in dense models, raising questions about their robustness. In this paper, we present GateBreaker, the first training-free, lightweight, and architecture-agnostic attack framework that compromises the safety alignment of modern MoE LLMs at inference time. GateBreaker operates in three stages: (i) gate-level profiling, which identifies safety experts disproportionately routed on harmful inputs, (ii) expert-level localization, which localizes the safety structure within safety experts, and (iii) targeted safety removal, which disables the identified safety structure to compromise the safety alignment. Our study shows that MoE safety concentrates within a small subset of neurons coordinated by sparse routing. Selectively disabling these neurons, approximately 3% of the neurons in the targeted expert layers, significantly increases the average attack success rate (ASR) from 7.4% to 64.9% against eight of the latest aligned MoE LLMs with limited utility degradation. These safety neurons transfer across models within the same family, raising ASR from 17.9% to 67.7% with a one-shot transfer attack. Furthermore, GateBreaker generalizes to five MoE vision-language models (VLMs) with 60.9% ASR on unsafe image inputs.
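To make the three-stage pipeline concrete, below is a minimal, hypothetical sketch on a toy top-k MoE layer. The function names (`profile_gate`, `locate_safety_neurons`, `disable_neurons`) and the scoring heuristics (routing-frequency gap, mean-activation gap) are illustrative assumptions, not the paper's exact procedure; a real attack would operate on activations collected from an actual MoE LLM over harmful and benign prompt sets.

```python
# Hypothetical sketch of GateBreaker's three stages on a toy top-k MoE layer.
# All helper names and scoring rules are assumptions for illustration only.
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """A simplified top-k gated MoE feed-forward layer."""
    def __init__(self, d_model=64, d_ff=128, n_experts=8, top_k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def route(self, x):
        # Indices of the top-k experts selected for each token.
        scores = self.gate(x)                       # [tokens, n_experts]
        return scores.topk(self.top_k, dim=-1).indices

# Stage (i): gate-level profiling -- experts routed disproportionately often
# on harmful inputs relative to benign ones are flagged as "safety experts".
def profile_gate(layer, harmful_acts, benign_acts, n_experts):
    def routing_freq(acts):
        idx = layer.route(acts).flatten()
        return torch.bincount(idx, minlength=n_experts).float() / idx.numel()
    return routing_freq(harmful_acts) - routing_freq(benign_acts)

# Stage (ii): expert-level localization -- within a safety expert, rank hidden
# neurons by how much more they activate on harmful inputs (illustrative proxy),
# keeping roughly 3% of the expert's neurons as in the paper's reported budget.
def locate_safety_neurons(expert, harmful_acts, benign_acts, frac=0.03):
    def mean_hidden(acts):
        return torch.relu(expert[0](acts)).mean(dim=0)  # mean activation per hidden neuron
    gap = mean_hidden(harmful_acts) - mean_hidden(benign_acts)
    k = max(1, int(frac * gap.numel()))
    return gap.topk(k).indices

# Stage (iii): targeted safety removal -- zero the selected neurons' weights so
# they no longer contribute to the expert's output.
def disable_neurons(expert, neuron_idx):
    with torch.no_grad():
        expert[0].weight[neuron_idx, :] = 0.0
        expert[0].bias[neuron_idx] = 0.0
        expert[2].weight[:, neuron_idx] = 0.0

# Usage on random stand-in activations (a real attack would use activations
# gathered from the target model on harmful / benign prompt sets).
if __name__ == "__main__":
    torch.manual_seed(0)
    layer = ToyMoELayer()
    harmful, benign = torch.randn(256, 64), torch.randn(256, 64)
    gap = profile_gate(layer, harmful, benign, n_experts=8)
    safety_expert_id = int(gap.argmax())
    expert = layer.experts[safety_expert_id]
    neurons = locate_safety_neurons(expert, harmful, benign, frac=0.03)
    disable_neurons(expert, neurons)
    print(f"disabled {len(neurons)} neurons in expert {safety_expert_id}")
```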
Source: arXiv: 2512.21008