Beyond Task Completion: Revealing Corrupt Success in LLM Agents through Procedure-Aware Evaluation
1️⃣ One-sentence summary
This paper proposes a new framework called Procedure-Aware Evaluation, which examines the concrete process by which an AI agent performs a task rather than only its final outcome. It finds that many tasks currently counted as successes actually conceal numerous rule violations, exposing serious flaws in existing evaluation methods.
Large Language Model (LLM)-based agents are increasingly adopted in high-stakes settings, but current benchmarks evaluate mainly whether a task was completed, not how. We introduce Procedure-Aware Evaluation (PAE), a framework that formalizes agent procedures as structured observations and exposes consistency relationships between what agents observe, communicate, and execute. PAE evaluates agents along complementary axes (Utility, Efficiency, Interaction Quality, Procedural Integrity) and applies multi-dimensional gating that categorically disqualifies corrupt outcomes. Evaluating state-of-the-art LLM agents on tau-bench yields findings at the axis, compliance, and benchmark levels. At the axis level, the dimensions capture non-redundant failure modes: utility masks reliability gaps, speed does not imply precision, and conciseness does not predict intent adherence. At the procedural-compliance level, 27-78% of benchmark-reported successes are corrupt successes concealing violations across interaction and integrity. Furthermore, gating substantially collapses the Pass^4 rate and affects model rankings. The analysis of corrupt-success cases reveals distinctive per-model failure signatures: GPT-5 spreads errors across policy, execution, and intent dimensions; Kimi-K2-Thinking concentrates 78% of violations in policy faithfulness and compliance; and Mistral-Large-3 is dominated by faithfulness failures. At the benchmark level, our analysis exposes structural flaws in the benchmark design, including task scope gaps, contradictory reward signals, and simulator artifacts that produce accidental successes.
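The multi-dimensional gating described above can be sketched as a simple predicate: an episode counts as a clean success only if the outcome check passes and no procedural violation was recorded, and a Pass^k score requires every one of k trials to be clean. This is a minimal illustrative sketch, not the paper's implementation; the names `EpisodeEval`, `gated_success`, and the violation labels are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EpisodeEval:
    task_completed: bool                     # benchmark's outcome-only success signal
    violations: List[str] = field(default_factory=list)  # e.g. policy, intent, execution

def gated_success(ep: EpisodeEval) -> bool:
    """Outcome success is categorically disqualified by any procedural violation."""
    return ep.task_completed and not ep.violations

def pass_k(trials: List[EpisodeEval]) -> bool:
    """Gated Pass^k: all k independent trials must be clean successes."""
    return all(gated_success(t) for t in trials)

clean = EpisodeEval(task_completed=True)
corrupt = EpisodeEval(task_completed=True, violations=["policy_faithfulness"])
print(gated_success(clean), gated_success(corrupt))  # True False
print(pass_k([clean, clean, clean, corrupt]))        # False
```

Under such a gate, a "corrupt success" (outcome achieved but with violations) no longer counts, which is why gated Pass^4 collapses relative to the outcome-only score.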
Source: arXiv:2603.03116