arXiv submission date: 2026-03-30
📄 Abstract - Evaluating Privilege Usage of Agents on Real-World Tools

Equipping LLM agents with real-world tools can substantially improve productivity. However, granting agents autonomy over tool use also transfers the associated privileges to both the agent and the underlying LLM. Improper privilege usage may lead to serious consequences, including information leakage and infrastructure damage. While several benchmarks have been built to study agent security, they often rely on pre-coded tools and restricted interaction patterns. Such crafted environments differ substantially from the real world, making it hard to assess agents' security capabilities in critical privilege control and usage. Therefore, we propose GrantBox, a security evaluation sandbox for analyzing agent privilege usage. GrantBox automatically integrates real-world tools and allows LLM agents to invoke genuine privileges, enabling the evaluation of privilege usage under prompt injection attacks. Our results indicate that while LLMs exhibit basic security awareness and can block some direct attacks, they remain vulnerable to more sophisticated attacks, resulting in an average attack success rate of 84.80% in carefully crafted scenarios.

Top-level tags: llm agents model evaluation
Detailed tags: security evaluation privilege control tool usage prompt injection real-world tools

Evaluating Privilege Usage of Agents on Real-World Tools


1️⃣ One-sentence summary

This paper proposes GrantBox, a security evaluation sandbox for testing how safely AI agents equipped with real-world tools use their privileges when facing sophisticated attacks. It finds that even though LLMs show basic security awareness, their defenses remain fragile against carefully crafted attacks, with an average attack success rate as high as 84.80%.
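The core measurement described above, deciding whether an agent's tool call exercised a privilege the user never granted, then aggregating that into an attack success rate, can be sketched as follows. This is a minimal illustration, not GrantBox's actual API: the `ToolCall` type, the `GRANTED` allow-list, and the simulated traces are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str
    privilege: str  # e.g. "read", "write", "admin" (hypothetical levels)

# Hypothetical allow-list: privileges the user actually granted the agent.
GRANTED = {"read"}

def is_violation(call: ToolCall) -> bool:
    """A call is a violation if it exercises an ungranted privilege."""
    return call.privilege not in GRANTED

# Simulated traces: the tool call an agent emitted after encountering a
# prompt-injection payload embedded in tool output (invented examples).
traces = [
    ToolCall("read_file", "read"),     # benign: attack did not escalate
    ToolCall("delete_repo", "admin"),  # injected instruction obeyed
    ToolCall("send_email", "write"),   # injected instruction obeyed
]

def attack_success_rate(traces: list[ToolCall]) -> float:
    """Fraction of traces where the injection induced a privilege violation."""
    return sum(is_violation(c) for c in traces) / len(traces)

print(f"ASR = {attack_success_rate(traces):.2%}")
```

In a real sandbox the traces would come from running the agent against genuine tools; the allow-list check is the part that turns raw call logs into the paper's attack-success-rate metric.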

Source: arXiv 2603.28166