arXiv submission date: 2025-12-07
📄 Abstract - DoVer: Intervention-Driven Auto Debugging for LLM Multi-Agent Systems

Large language model (LLM)-based multi-agent systems are challenging to debug because failures often arise from long, branching interaction traces. The prevailing practice is to leverage LLMs for log-based failure localization, attributing errors to a specific agent and step. However, this paradigm has two key limitations: (i) log-only debugging lacks validation, producing untested hypotheses, and (ii) single-step or single-agent attribution is often ill-posed, as we find that multiple distinct interventions can independently repair the failed task. To address the first limitation, we introduce DoVer, an intervention-driven debugging framework, which augments hypothesis generation with active verification through targeted interventions (e.g., editing messages, altering plans). For the second limitation, rather than evaluating on attribution accuracy, we focus on measuring whether the system resolves the failure or makes quantifiable progress toward task success, reflecting a more outcome-oriented view of debugging. Within the Magentic-One agent framework, on datasets derived from GAIA and AssistantBench, DoVer flips 18-28% of failed trials into successes, achieves up to 16% milestone progress, and validates or refutes 30-60% of failure hypotheses. DoVer also performs effectively on a different dataset (GSMPlus) and agent framework (AG2), where it recovers 49% of failed trials. These results highlight intervention as a practical mechanism for improving reliability in agentic systems and open opportunities for more robust, scalable debugging methods for LLM-based multi-agent systems. Project website and code will be available at this https URL.
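
To make the intervention-driven loop concrete, below is a minimal Python sketch of hypothesis verification through targeted interventions. All names here (`Hypothesis`, `Intervention`, `rerun_task`, `debug_by_intervention`) are illustrative assumptions rather than DoVer's actual API, and the re-execution step is stubbed out; in practice it would resume the agent framework (e.g., Magentic-One) from the edited point in the trace.

```python
# Minimal sketch of an intervention-driven debugging loop (illustrative only;
# names and structure are assumptions, not the paper's implementation).
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class Intervention:
    """A targeted edit to a failed trace, e.g. rewriting a message or a plan step."""
    description: str
    apply: Callable[[List[str]], List[str]]  # returns the edited trace


@dataclass
class Hypothesis:
    """A candidate failure explanation, paired with interventions that would test it."""
    claim: str
    interventions: List[Intervention] = field(default_factory=list)


def rerun_task(trace: List[str]) -> bool:
    """Stub: re-execute the task from the edited trace and report success.
    A real system would resume the multi-agent run and check the task outcome."""
    return any("corrected" in step for step in trace)


def debug_by_intervention(failed_trace: List[str],
                          hypotheses: List[Hypothesis]) -> List[Tuple[str, str]]:
    """Apply each hypothesis's interventions; a hypothesis is 'validated' if some
    intervention flips the failed trial into a success, otherwise it is refuted/untested."""
    results = []
    for hyp in hypotheses:
        validated = False
        for iv in hyp.interventions:
            edited = iv.apply(list(failed_trace))
            if rerun_task(edited):
                validated = True
                break
        results.append((hyp.claim, "validated" if validated else "refuted/untested"))
    return results


if __name__ == "__main__":
    trace = [
        "plan: search the web for the answer",
        "agent: fetched wrong page",
        "agent: reported stale answer",
    ]
    hyp = Hypothesis(
        claim="The web agent fetched the wrong page at step 2",
        interventions=[Intervention(
            description="Rewrite step 2 with the corrected page",
            apply=lambda t: t[:1] + ["agent: fetched corrected page"] + t[2:],
        )],
    )
    for claim, verdict in debug_by_intervention(trace, [hyp]):
        print(f"{claim} -> {verdict}")
```

The outcome-oriented evaluation in the abstract corresponds to counting how many hypotheses end up "validated" (the trial flips to success) rather than scoring whether the blamed agent/step matches a ground-truth label.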

Top-level tags: llm agents systems
Detailed tags: debugging, multi-agent systems, failure analysis, intervention, evaluation

DoVer: Intervention-Driven Auto Debugging for LLM Multi-Agent Systems


1️⃣ One-sentence summary

This paper proposes DoVer, an automated debugging framework that localizes and repairs failures in LLM-based multi-agent systems through active intervention and verification, substantially improving task success rates and offering a new approach to reliability-focused debugging of complex AI systems.


Source: arXiv:2512.06749