arXiv submission date: 2026-04-14
📄 Abstract - Policy-Invisible Violations in LLM-Based Agents

LLM-based agents can execute actions that are syntactically valid, user-sanctioned, and semantically appropriate, yet still violate organizational policy because the facts needed for correct policy judgment are hidden at decision time. We call this failure mode policy-invisible violations: cases in which compliance depends on entity attributes, contextual state, or session history absent from the agent's visible context. We present PhantomPolicy, a benchmark spanning eight violation categories with balanced violation and safe-control cases, in which all tool responses contain clean business data without policy metadata. We manually review all 600 model traces produced by five frontier models and evaluate them using human-reviewed trace labels. Manual review changes 32 labels (5.3%) relative to the original case-level annotations, confirming the need for trace-level human review. To demonstrate what world-state-grounded enforcement can achieve under favorable conditions, we introduce Sentinel, an enforcement framework based on counterfactual graph simulation. Sentinel treats every agent action as a proposed mutation to an organizational knowledge graph, performs speculative execution to materialize the post-action world state, and verifies graph-structural invariants to decide Allow/Block/Clarify. Against human-reviewed trace labels, Sentinel substantially outperforms a content-only DLP baseline (93.0% vs. 68.8% accuracy) while maintaining high precision, though it still leaves room for improvement on certain violation categories. These results demonstrate what becomes achievable once policy-relevant world state is made available to the enforcement layer.
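The abstract's enforcement loop (treat an action as a proposed graph mutation, speculatively execute it, then check invariants on the post-action state to decide Allow/Block/Clarify) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the graph schema, the `share` action, and the clearance invariant are all hypothetical examples invented here.

```python
from copy import deepcopy

def check_action(graph, action, invariants):
    """Speculatively apply `action` to a copy of `graph`, then verify
    each invariant on the materialized post-action world state.
    Invariants return True (holds), False (violated), or None (unknown,
    i.e. more context is needed). Returns "allow", "block", or "clarify".
    Illustrative sketch only; not the paper's Sentinel implementation."""
    post_state = deepcopy(graph)   # sandbox: real graph is untouched
    action(post_state)             # speculative execution of the mutation
    needs_clarification = False
    for inv in invariants:
        verdict = inv(post_state)
        if verdict is False:
            return "block"
        if verdict is None:
            needs_clarification = True
    return "clarify" if needs_clarification else "allow"

# --- hypothetical organizational knowledge graph -------------------
# Users carry clearance levels; documents carry sensitivity levels and
# "shared_with" edges recording access grants.
graph = {
    "users": {"alice": {"clearance": 1}, "bob": {"clearance": 3}},
    "docs":  {"q3_report": {"sensitivity": 2, "shared_with": {"bob"}}},
}

def share(doc, user):
    """Proposed agent action: grant `user` access to `doc`."""
    def mutation(g):
        g["docs"][doc]["shared_with"].add(user)
    return mutation

def clearance_invariant(g):
    """Graph-structural invariant: no document may be shared with a
    user whose clearance is below the document's sensitivity."""
    for doc in g["docs"].values():
        for u in doc["shared_with"]:
            if g["users"][u]["clearance"] < doc["sensitivity"]:
                return False
    return True

# Sharing with alice (clearance 1 < sensitivity 2) violates the
# invariant only in the *post-action* state, which is why the
# counterfactual simulation step matters.
print(check_action(graph, share("q3_report", "alice"), [clearance_invariant]))
print(check_action(graph, share("q3_report", "bob"), [clearance_invariant]))
```

Note that the content of the action itself is clean business data; the violation is only visible once the proposed mutation is materialized against the world state, which is the gap a content-only DLP filter cannot see.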


Policy-Invisible Violations in LLM-Based Agents


1️⃣ One-Sentence Summary

This paper finds that LLM-based agents, even when executing seemingly compliant tasks, can violate organizational policy because key information (such as entity attributes or session history) is unavailable to them at decision time, and it proposes a framework called Sentinel that detects such violations more effectively by simulating the post-action world state.

Source: arXiv: 2604.12177