菜单

关于 🐙 GitHub
arXiv 提交日期: 2026-02-26
📄 Abstract - AgentSentry: Mitigating Indirect Prompt Injection in LLM Agents via Temporal Causal Diagnostics and Context Purification

Large language model (LLM) agents increasingly rely on external tools and retrieval systems to autonomously complete complex tasks. However, this design exposes agents to indirect prompt injection (IPI), where attacker-controlled context embedded in tool outputs or retrieved content silently steers agent actions away from user intent. Unlike prompt-based attacks, IPI unfolds over multi-turn trajectories, making malicious control difficult to disentangle from legitimate task execution. Existing inference-time defenses primarily rely on heuristic detection and conservative blocking of high-risk actions, which can prematurely terminate workflows or broadly suppress tool usage under ambiguous multi-turn scenarios. We propose AgentSentry, a novel inference-time detection and mitigation framework for tool-augmented LLM agents. To the best of our knowledge, AgentSentry is the first inference-time defense to model multi-turn IPI as a temporal causal takeover. It localizes takeover points via controlled counterfactual re-executions at tool-return boundaries and enables safe continuation through causally guided context purification that removes attack-induced deviations while preserving task-relevant evidence. We evaluate AgentSentry on the \textsc{AgentDojo} benchmark across four task suites, three IPI attack families, and multiple black-box LLMs. AgentSentry eliminates successful attacks and maintains strong utility under attack, achieving an average Utility Under Attack (UA) of 74.55 %, improving UA by 20.8 to 33.6 percentage points over the strongest baselines without degrading benign performance.

顶级标签: llm agents systems
详细标签: prompt injection agent security temporal causal modeling context purification inference-time defense 或 搜索:

AgentSentry:通过时序因果诊断与上下文净化缓解大语言模型智能体中的间接提示注入攻击 / AgentSentry: Mitigating Indirect Prompt Injection in LLM Agents via Temporal Causal Diagnostics and Context Purification


1️⃣ 一句话总结

这篇论文提出了一种名为AgentSentry的新方法,它通过分析多轮对话中的因果关系并净化被污染的上下文,有效防御了攻击者通过外部工具输出悄悄操控AI智能体的新型安全威胁,在保证正常任务完成的同时大幅提升了受攻击时的系统可用性。

源自 arXiv: 2602.22724