ICON: Indirect Prompt Injection Defense for Agents based on Inference-Time Correction
1️⃣ One-Sentence Summary
This paper proposes ICON, a novel defense framework that counters indirect prompt injection attacks by probing and rectifying the model's internal attention mechanism, without interrupting the agent's normal workflow, substantially improving task execution efficiency while preserving security.
Large Language Model (LLM) agents are susceptible to Indirect Prompt Injection (IPI) attacks, where malicious instructions embedded in retrieved content hijack the agent's execution. Existing defenses typically rely on strict filtering or refusal mechanisms, which suffer from a critical limitation: over-refusal, prematurely terminating valid agentic workflows. We propose ICON, a probing-to-mitigation framework that neutralizes attacks while preserving task continuity. Our key insight is that IPI attacks leave distinct over-focusing signatures in the latent space. We introduce a Latent Space Trace Prober to detect attacks based on high intensity scores. Subsequently, a Mitigating Rectifier performs surgical attention steering that selectively manipulates adversarial query-key dependencies while amplifying task-relevant elements to restore the LLM's functional trajectory. Extensive evaluations on multiple backbones show that ICON achieves a competitive 0.4% ASR, matching commercial-grade detectors, while yielding an over 50% task-utility gain. Furthermore, ICON demonstrates robust Out-of-Distribution (OOD) generalization and extends effectively to multi-modal agents, establishing a superior balance between security and efficiency.
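The probing-to-mitigation idea can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the intensity score, the 0.5 threshold, the `damp` factor, and the function names are all hypothetical simplifications, assuming attention rows that sum to 1 and a known index set for externally retrieved tokens.

```python
import numpy as np

def intensity_score(attn, external_idx):
    """Hypothetical over-focusing score: mean attention mass that query
    tokens place on externally retrieved (potentially injected) keys."""
    return attn[:, external_idx].sum(axis=1).mean()

def rectify(attn, external_idx, damp=0.1):
    """Toy attention steering: down-weight attention to suspected injected
    tokens, then renormalize each row so weights still sum to 1."""
    out = attn.copy()
    out[:, external_idx] *= damp
    out /= out.sum(axis=1, keepdims=True)
    return out

# Toy example: 3 query tokens, 5 key tokens; keys 3-4 hold retrieved content.
attn = np.array([[0.05, 0.05, 0.10, 0.40, 0.40],
                 [0.10, 0.10, 0.10, 0.35, 0.35],
                 [0.20, 0.10, 0.10, 0.30, 0.30]])
ext = [3, 4]
score = intensity_score(attn, ext)
if score > 0.5:  # hypothetical detection threshold
    attn = rectify(attn, ext)
```

Here the retrieved-content keys absorb 70% of the attention mass on average, so the probe flags an over-focusing signature and the rectifier redistributes attention back toward the original task tokens.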
Source: arXiv 2602.20708