菜单

🤖 系统
📄 Abstract - In-Context Representation Hijacking

We introduce $\textbf{Doublespeak}$, a simple in-context representation hijacking attack against large language models (LLMs). The attack works by systematically replacing a harmful keyword (e.g., bomb) with a benign token (e.g., carrot) across multiple in-context examples, provided a prefix to a harmful request. We demonstrate that this substitution leads to the internal representation of the benign token converging toward that of the harmful one, effectively embedding the harmful semantics under a euphemism. As a result, superficially innocuous prompts (e.g., "How to build a carrot?") are internally interpreted as disallowed instructions (e.g., "How to build a bomb?"), thereby bypassing the model's safety alignment. We use interpretability tools to show that this semantic overwrite emerges layer by layer, with benign meanings in early layers converging into harmful semantics in later ones. Doublespeak is optimization-free, broadly transferable across model families, and achieves strong success rates on closed-source and open-source systems, reaching 74% ASR on Llama-3.3-70B-Instruct with a single-sentence context override. Our findings highlight a new attack surface in the latent space of LLMs, revealing that current alignment strategies are insufficient and should instead operate at the representation level.

顶级标签: llm model evaluation theory
详细标签: adversarial attack safety alignment representation hijacking in-context learning interpretability 或 搜索:

上下文表示劫持 / In-Context Representation Hijacking


1️⃣ 一句话总结

这篇论文提出了一种名为‘Doublespeak’的简单攻击方法,通过在多轮对话示例中系统性地将有害词汇(如‘炸弹’)替换为无害词汇(如‘胡萝卜’),使得大语言模型在内部将无害词汇的语义理解为有害内容,从而绕过模型的安全防护机制。


📄 打开原文 PDF