arXiv submission date: 2026-03-31
📄 Abstract - Beyond pass@1: A Reliability Science Framework for Long-Horizon LLM Agents

Existing benchmarks measure capability -- whether a model succeeds on a single attempt -- but production deployments require reliability -- consistent success across repeated attempts on tasks of varying duration. We show these properties diverge systematically as task duration grows, and that pass@1 on short tasks is structurally blind to this divergence. We introduce a reliability science framework for long-horizon LLM agents with four metrics: Reliability Decay Curve (RDC), Variance Amplification Factor (VAF), Graceful Degradation Score (GDS), and Meltdown Onset Point (MOP). We evaluate 10 models across 23,392 episodes on a 396-task benchmark spanning four duration buckets and three domains. Key findings: (1) reliability decay is domain-stratified -- SE GDS drops from 0.90 to 0.44 while document processing is nearly flat (0.74 to 0.71); (2) VAF bifurcates by capability tier -- high VAF is a capability signature, not an instability signal; (3) capability and reliability rankings diverge substantially, with multi-rank inversions at long horizons; (4) frontier models have the highest meltdown rates (up to 19%) because they attempt ambitious multi-step strategies that sometimes spiral; and (5) memory scaffolds universally hurt long-horizon performance across all 10 models. These results motivate reliability as a first-class evaluation dimension alongside capability.
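The paper does not spell out its metric definitions in the abstract, but the core capability-vs-reliability distinction can be illustrated with a minimal sketch: pass@1 (mean single-attempt success) versus a pass^k-style consistency metric (probability that all k repeated attempts on a task succeed, averaged over tasks). The function names and the pass^k definition here are illustrative assumptions, not the paper's RDC/VAF/GDS/MOP metrics.

```python
def pass_at_1(task_success_probs):
    # Capability: aggregate single-attempt success rate across tasks.
    return sum(task_success_probs) / len(task_success_probs)

def pass_hat_k(task_success_probs, k):
    # Reliability (illustrative pass^k): probability that k independent
    # attempts on a task all succeed, averaged across tasks.
    return sum(p ** k for p in task_success_probs) / len(task_success_probs)

# Hypothetical per-task success probabilities for two models:
# Model A is moderately capable on every task.
model_a = [0.5] * 10
# Model B fully solves half the tasks and never solves the rest.
model_b = [1.0] * 5 + [0.0] * 5

# Identical capability (pass@1 = 0.5 for both) ...
print(pass_at_1(model_a), pass_at_1(model_b))  # 0.5 0.5
# ... but sharply different reliability under repetition:
print(pass_hat_k(model_a, 5))  # 0.03125
print(pass_hat_k(model_b, 5))  # 0.5
```

This is exactly the divergence the abstract describes: two models indistinguishable on a single attempt can rank very differently once consistency across repeated attempts is measured.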

Top-level tags: llm agents, model evaluation
Detailed tags: reliability, long-horizon agents, benchmark, evaluation metrics, agent performance

Beyond pass@1: A Reliability Science Framework for Long-Horizon LLM Agents


1️⃣ One-sentence summary

This paper argues that measuring an AI model's success rate on a single task attempt (capability) is insufficient to assess its real-world performance on long-duration, repeated tasks (reliability). It proposes a reliability science framework with four new metrics, and its large-scale experiments show that models' "capability" rankings and "reliability" rankings diverge significantly on long-horizon tasks, and that the most advanced models are actually more prone to failure because they attempt complex multi-step strategies.

Source: arXiv:2603.29231