arXiv submission date: 2026-03-16
📄 Abstract - CCTU: A Benchmark for Tool Use under Complex Constraints

Solving problems through tool use under explicit constraints constitutes a highly challenging yet unavoidable scenario for large language models (LLMs), requiring capabilities such as function calling, instruction following, and self-refinement. However, progress has been hindered by the absence of dedicated evaluations. To address this, we introduce CCTU, a benchmark for evaluating LLM tool use under complex constraints. CCTU is grounded in a taxonomy of 12 constraint categories spanning four dimensions (i.e., resource, behavior, toolset, and response). The benchmark comprises 200 carefully curated and challenging test cases across diverse tool-use scenarios, each involving an average of seven constraint types and an average prompt length exceeding 4,700 tokens. To enable reliable evaluation, we develop an executable constraint validation module that performs step-level validation and enforces compliance during multi-turn interactions between models and their environments. We evaluate nine state-of-the-art LLMs in both thinking and non-thinking modes. Results indicate that when strict adherence to all constraints is required, no model achieves a task completion rate above 20%. Further analysis reveals that models violate constraints in over 50% of cases, particularly in the resource and response dimensions. Moreover, LLMs demonstrate limited capacity for self-refinement even after receiving detailed feedback on constraint violations, highlighting a critical bottleneck in the development of robust tool-use agents. To facilitate future research, we release the data and code.
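The abstract describes an executable constraint validation module that checks each tool call at the step level across the four constraint dimensions. The paper's actual implementation is not shown here; the following is a minimal illustrative sketch of what such step-level validation could look like, with all class names, constraint rules, and dimensions chosen for illustration only.

```python
# Hypothetical sketch of step-level constraint validation, loosely inspired by
# the benchmark's description. Names and rules are illustrative assumptions,
# not the authors' implementation.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class ConstraintValidator:
    """Checks each tool call against explicit constraints and records violations."""
    allowed_tools: set                # toolset dimension: which tools may be used
    max_calls: int                    # resource dimension: budget on total calls
    forbidden_args: dict = field(default_factory=dict)  # behavior dimension
    calls_made: int = 0
    violations: list = field(default_factory=list)

    def validate_step(self, call: ToolCall) -> bool:
        """Validate one tool call; log any violations and return compliance."""
        ok = True
        if call.name not in self.allowed_tools:
            self.violations.append(("toolset", call.name))
            ok = False
        self.calls_made += 1
        if self.calls_made > self.max_calls:
            self.violations.append(("resource", "call budget exceeded"))
            ok = False
        for arg, banned in self.forbidden_args.get(call.name, {}).items():
            if call.args.get(arg) in banned:
                self.violations.append(("behavior", f"{call.name}.{arg}"))
                ok = False
        return ok
```

In a multi-turn loop, the validator's feedback (the `violations` list) could be surfaced back to the model after each step, mirroring the paper's setup in which models receive detailed feedback on constraint violations and are given the chance to self-refine.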

Top-level tags: llm benchmark agents
Detailed tags: tool use, constraint evaluation, function calling, self-refinement, validation module

CCTU: A Benchmark for Tool Use under Complex Constraints


1️⃣ One-sentence summary

This paper introduces CCTU, a new benchmark designed to evaluate how well large language models use tools under complex constraints (such as resource limits and behavioral rules). The results show that even state-of-the-art models complete very few tasks when strict adherence to all constraints is required, and that their self-refinement ability is limited, revealing a key bottleneck in the development of robust tool-use agents.

Source: arXiv:2603.15309