Hide and Seek in Embedding Space: Geometry-based Steganography and Detection in Large Language Models
1️⃣ One-sentence summary
This paper shows that fine-tuned large language models can hide secret information in their outputs (steganography). The authors propose a stealthier steganographic scheme based on geometric relationships in embedding space, and then, by analyzing the model's internal activation patterns, successfully detect the traces left by this malicious fine-tuning.
Fine-tuned LLMs can covertly encode prompt secrets into outputs via steganographic channels. Prior work demonstrated this threat but relied on trivially recoverable encodings. We formalize payload recoverability via classifier accuracy and show previous schemes achieve 100% recoverability. In response, we introduce low-recoverability steganography, replacing arbitrary mappings with embedding-space-derived ones. For Llama-8B (LoRA) and Ministral-8B (LoRA) trained on TrojanStego prompts, exact secret recovery rises from 17% to 30% (+78%) and from 24% to 43% (+80%) respectively, while on Llama-70B (LoRA) trained on Wiki prompts it climbs from 9% to 19% (+123%), all while reducing payload recoverability.

We then turn to detection. We argue that detecting fine-tuning-based steganographic attacks requires approaches beyond traditional steganalysis: standard approaches measure distributional shift, which is an expected side effect of fine-tuning. Instead, we propose a mechanistic-interpretability approach: linear probes trained on later-layer activations detect the secret with up to 33% higher accuracy in fine-tuned models than in base models, even for low-recoverability schemes. This suggests that malicious fine-tuning leaves actionable internal signatures amenable to interpretability-based defenses.
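To make the detection idea concrete, below is a minimal sketch of a linear probe on later-layer activations. This is an illustration of the general technique, not the authors' code: the activation-extraction step is stubbed out with placeholder arrays, and the layer choice, dataset size, and classifier settings are assumptions.

```python
# Minimal sketch of probe-based detection (illustrative, not the paper's code).
# Assumes later-layer hidden states have already been extracted for a set of
# prompts, along with the secret bit each prompt carries. The detection signal
# is the gap in probe accuracy between the base and the fine-tuned model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder for real data: activations[i] would be the chosen later-layer
# residual-stream vector at the last token of prompt i; secret_bits[i] is the
# payload bit encoded in that prompt. Shapes are hypothetical.
hidden_size, n_prompts = 4096, 512
activations = rng.normal(size=(n_prompts, hidden_size)).astype(np.float32)
secret_bits = rng.integers(0, 2, size=n_prompts)

X_train, X_test, y_train, y_test = train_test_split(
    activations, secret_bits, test_size=0.25, random_state=0
)

# A linear probe is just a logistic-regression classifier on activations.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")

# In the paper's setting, one would run this same pipeline twice -- once on
# base-model activations, once on fine-tuned-model activations -- and flag the
# model if the fine-tuned probe is markedly more accurate (up to 33% higher
# in the reported experiments).
```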
Source: arXiv:2601.22818