GUARD-SLM: Token Activation-Based Defense Against Jailbreak Attacks for Small Language Models
1️⃣ One-sentence summary
This study finds that small language models are highly vulnerable to malicious prompt attacks and proposes GUARD-SLM, a lightweight defense that analyzes the model's internal activation patterns to filter harmful inputs at inference time, enabling safe model deployment.
Small Language Models (SLMs) are emerging as efficient and economically viable alternatives to Large Language Models (LLMs), offering competitive performance with significantly lower computational costs and latency. These advantages make SLMs well suited for efficient deployment on resource-constrained edge devices. However, existing jailbreak defenses show limited robustness against heterogeneous attacks, largely due to an incomplete understanding of the internal representations across different layers of language models that facilitate jailbreak behaviors. In this paper, we conduct a comprehensive empirical study on 9 jailbreak attacks across 7 SLMs and 3 LLMs. Our analysis shows that SLMs remain highly vulnerable to malicious prompts that bypass safety alignment. We analyze hidden-layer activations across different layers and model architectures, revealing that different input types form distinguishable patterns in the internal representation space. Based on this observation, we propose GUARD-SLM, a lightweight token activation-based method that operates in the representation space to filter malicious prompts during inference while preserving benign ones. Our findings highlight robustness limitations across layers of language models and provide a practical direction for secure small language model deployment.
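The abstract does not spell out GUARD-SLM's exact detector, but the core idea, that benign and malicious prompts form distinguishable patterns in hidden-activation space, can be sketched with a toy probe. The snippet below is a minimal illustration, not the paper's method: it assumes mean-pooled per-token hidden states and a hypothetical nearest-centroid classifier (the names `ActivationFilter` and `mean_activation` are illustrative), with synthetic vectors standing in for real model activations.

```python
import numpy as np

def mean_activation(token_activations):
    # Pool per-token hidden states (n_tokens x d) into one prompt vector.
    return np.asarray(token_activations).mean(axis=0)

class ActivationFilter:
    """Toy nearest-centroid probe in the hidden-representation space.

    Hypothetical stand-in for an activation-based jailbreak detector:
    fit centroids of benign and malicious prompt activations, then flag
    a new prompt by which centroid its pooled activation is closer to.
    """
    def fit(self, benign_acts, malicious_acts):
        self.benign_c = np.mean([mean_activation(a) for a in benign_acts], axis=0)
        self.malicious_c = np.mean([mean_activation(a) for a in malicious_acts], axis=0)
        return self

    def is_malicious(self, token_activations):
        v = mean_activation(token_activations)
        return np.linalg.norm(v - self.malicious_c) < np.linalg.norm(v - self.benign_c)

# Synthetic demo: benign "activations" cluster near +1, malicious near -1.
rng = np.random.default_rng(0)
d = 8  # hidden dimension (real SLMs use hundreds to thousands)
benign = [rng.normal(1.0, 0.1, size=(5, d)) for _ in range(20)]
malicious = [rng.normal(-1.0, 0.1, size=(5, d)) for _ in range(20)]
probe = ActivationFilter().fit(benign, malicious)
print(probe.is_malicious(rng.normal(-1.0, 0.1, size=(5, d))))  # True
print(probe.is_malicious(rng.normal(1.0, 0.1, size=(5, d))))   # False
```

In a real setting, the activations would come from a chosen hidden layer of the SLM (e.g. via `output_hidden_states=True` in Hugging Face Transformers), and the probe would likely be a trained classifier rather than raw centroids; this sketch only shows why separable activation patterns make inference-time filtering cheap.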
Source: arXiv: 2603.28817