Tool-Aware Planning in Contact Center AI: Evaluating LLMs through Lineage-Guided Query Decomposition
1️⃣ One-sentence summary
This paper proposes an evaluation framework for contact-center scenarios that tests LLMs' tool-planning ability by decomposing complex business queries into executable steps. It finds that models still struggle significantly with multi-step and compound queries, and it reveals key gaps in tool understanding and usage.
We present a domain-grounded framework and benchmark for tool-aware plan generation in contact centers, where answering a business-insights query, our target use case, requires decomposing it into executable steps over structured tools (Text2SQL (T2S)/Snowflake) and unstructured tools (RAG/transcripts), with explicit depends_on edges enabling parallelism. Our contributions are threefold: (i) a reference-based plan evaluation framework operating in two modes: a metric-wise evaluator spanning seven dimensions (e.g., tool-prompt alignment, query adherence) and a one-shot evaluator; (ii) a data-curation methodology that iteratively refines plans via an evaluator->optimizer loop to produce high-quality plan lineages (ordered plan revisions) while reducing manual effort; and (iii) a large-scale study of 14 LLMs across sizes and families on their ability to decompose queries into step-by-step, executable, tool-assigned plans, evaluated under prompts with and without lineage. Empirically, LLMs struggle on compound queries and on plans exceeding 4 steps (typical plans span 5-15 steps); the best total metric score reaches 84.8% (Claude-3-7-Sonnet), while the strongest one-shot match rate at the "A+" tier (Extremely Good, Very Good) is only 49.75% (o3-mini). Plan lineage yields mixed gains overall but benefits several top models and improves step executability for many. Our results highlight persistent gaps in tool understanding, especially in tool-prompt alignment and tool-usage completeness, and show that shorter, simpler plans are markedly easier. The framework and findings provide a reproducible path for assessing and improving agentic planning with tools for answering data-analysis queries in contact-center settings.
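To make the plan representation concrete: a decomposed plan is a set of tool-assigned steps whose explicit depends_on edges determine which steps can run in parallel. The sketch below is illustrative only; the step fields, tool names, and prompts are assumptions, not the paper's actual schema.

```python
# Hypothetical plan: each step names a tool and lists its dependencies.
# Fields ("id", "tool", "prompt", "depends_on") are illustrative assumptions.
from collections import defaultdict

plan = [
    {"id": "s1", "tool": "Text2SQL", "prompt": "Count calls per queue last week", "depends_on": []},
    {"id": "s2", "tool": "RAG", "prompt": "Find complaint themes in transcripts", "depends_on": []},
    {"id": "s3", "tool": "Text2SQL", "prompt": "Join call counts with CSAT scores", "depends_on": ["s1"]},
    {"id": "s4", "tool": "LLM", "prompt": "Summarize combined insights", "depends_on": ["s2", "s3"]},
]

def execution_waves(plan):
    """Group steps into waves: steps in the same wave have no
    dependency path between them and can execute in parallel."""
    by_id = {s["id"]: s for s in plan}
    level = {}

    def depth(sid):
        if sid not in level:
            deps = by_id[sid]["depends_on"]
            level[sid] = 0 if not deps else 1 + max(depth(d) for d in deps)
        return level[sid]

    waves = defaultdict(list)
    for s in plan:
        waves[depth(s["id"])].append(s["id"])
    return [waves[k] for k in sorted(waves)]

print(execution_waves(plan))  # [['s1', 's2'], ['s3'], ['s4']]
```

Here the structured (Text2SQL) and unstructured (RAG) steps s1 and s2 share no dependency, so they form the first parallel wave, while s4 must wait for both branches.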
Source: arXiv: 2602.14955