arXiv submission date: 2026-01-22
📄 Abstract - Agentic Confidence Calibration

AI agents are rapidly advancing from passive language models to autonomous systems executing complex, multi-step tasks. Yet their overconfidence when they fail remains a fundamental barrier to deployment in high-stakes settings. Existing calibration methods, built for static single-turn outputs, cannot address the unique challenges of agentic systems, such as compounding errors along trajectories, uncertainty from external tools, and opaque failure modes. To address these challenges, we introduce, for the first time, the problem of Agentic Confidence Calibration and propose Holistic Trajectory Calibration (HTC), a novel diagnostic framework that extracts rich process-level features ranging from macro dynamics to micro stability across an agent's entire trajectory. Powered by a simple, interpretable model, HTC consistently surpasses strong baselines in both calibration and discrimination across eight benchmarks, multiple LLMs, and diverse agent frameworks. Beyond performance, HTC delivers three essential advances: it provides interpretability by revealing the signals behind failure, enables transferability by applying across domains without retraining, and achieves generalization through a General Agent Calibrator (GAC) that attains the best calibration (lowest ECE) on the out-of-domain GAIA benchmark. Together, these contributions establish a new process-centric paradigm for confidence calibration, providing a framework for diagnosing and enhancing the reliability of AI agents.
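The abstract describes HTC as extracting process-level features from an agent's full trajectory, feeding them to a simple interpretable model, and evaluating calibration with ECE. The sketch below is a minimal illustration under assumptions: the three features (step count, tool-error rate, mean token entropy), the step fields they read, and the logistic-regression calibrator are hypothetical stand-ins rather than the paper's actual HTC feature set or model; only the expected calibration error (ECE) follows its standard definition.

```python
# Hypothetical sketch of process-level confidence calibration; feature names
# and the logistic-regression calibrator are illustrative assumptions, not the
# paper's HTC implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression


def trajectory_features(traj):
    """Map one agent trajectory (a list of step dicts) to a feature vector.

    Assumed step fields: 'tool_error' (bool) and 'token_entropy' (float),
    stand-ins for the macro-dynamics / micro-stability features the abstract
    mentions.
    """
    n = len(traj)
    tool_error_rate = sum(s["tool_error"] for s in traj) / max(n, 1)
    mean_entropy = float(np.mean([s["token_entropy"] for s in traj])) if n else 0.0
    return np.array([n, tool_error_rate, mean_entropy], dtype=float)


def expected_calibration_error(conf, correct, n_bins=10):
    """Standard ECE: bin-weighted gap between mean confidence and accuracy."""
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf >= lo) & (conf <= hi) if lo == 0 else (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece


def fit_calibrator(trajectories, success_labels):
    """Fit an interpretable calibrator on trajectories with success/failure labels."""
    X = np.stack([trajectory_features(t) for t in trajectories])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, success_labels)
    return model


def calibrated_confidence(model, traj):
    """Predicted probability that the agent's trajectory ends in success."""
    return model.predict_proba(trajectory_features(traj)[None, :])[0, 1]
```

The interpretable choice of model is deliberate: logistic-regression weights over named trajectory features make it easy to see which process signals (e.g., a high tool-error rate) drive low confidence, matching the diagnostic aim described in the abstract.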

Top tags: agents, model evaluation, llm
Detailed tags: confidence calibration, agent reliability, trajectory analysis, evaluation framework, error diagnosis

Agentic Confidence Calibration


1️⃣ One-Sentence Summary

Addressing the problem that AI agents are overconfident when executing complex tasks, this paper introduces, for the first time, the concept of Agentic Confidence Calibration and develops a new method called Holistic Trajectory Calibration (HTC), which analyzes the entire task-execution process to assess and calibrate an agent's reliability more accurately, thereby improving its safety in high-stakes settings.

Source: arXiv 2601.15778