📄 Abstract - The Collaboration Gap

The trajectory of AI development suggests that we will increasingly rely on agent-based systems composed of independently developed agents with different information, privileges, and tools. The success of these systems will critically depend on effective collaboration among these heterogeneous agents, even under partial observability. Despite intense interest, few empirical studies have evaluated such agent-agent collaboration at scale. We propose a collaborative maze-solving benchmark that (i) isolates collaborative capabilities, (ii) modulates problem complexity, (iii) enables scalable automated grading, and (iv) imposes no output-format constraints, preserving ecological plausibility. Using this framework, we evaluate 32 leading open- and closed-source models in solo, homogeneous, and heterogeneous pairings. Our results reveal a "collaboration gap": models that perform well solo often degrade substantially when required to collaborate. Collaboration can break down dramatically; for instance, small distilled models that solve mazes well alone may fail almost completely in certain pairings. We find that starting with the stronger agent often improves outcomes, motivating a "relay inference" approach where the stronger agent leads before handing off to the weaker one, closing much of the gap. Our findings argue for (1) collaboration-aware evaluation, (2) training strategies developed to enhance collaborative capabilities, and (3) interaction design that reliably elicits agents' latent skills, guidance that applies to AI-AI and human-AI collaboration.
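The "relay inference" idea from the abstract — let the stronger agent lead the exchange before handing off to the weaker one — can be sketched minimally. The agent-as-function interface, the transcript format, and the fixed turn-count handoff below are illustrative assumptions, not the paper's exact protocol.

```python
# Hedged sketch of relay inference: the stronger agent produces the first
# `lead_turns` messages of the dialogue, then the weaker agent takes over.
# The Agent interface (transcript in, next message out) is an assumption
# made for illustration only.
from typing import Callable, List

Agent = Callable[[List[str]], str]  # maps transcript-so-far -> next message


def relay_inference(strong: Agent, weak: Agent,
                    lead_turns: int, total_turns: int) -> List[str]:
    """Run a dialogue where `strong` speaks for the first `lead_turns`
    turns, then hands the conversation off to `weak`."""
    transcript: List[str] = []
    for turn in range(total_turns):
        agent = strong if turn < lead_turns else weak
        transcript.append(agent(transcript))
    return transcript
```

With `lead_turns = 0` this reduces to the weaker agent working alone; raising `lead_turns` lets the stronger agent establish context (e.g. a partial maze solution) before the handoff.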

Top tags: agents, benchmark, model evaluation
Detailed tags: multi-agent collaboration, benchmark design, partial observability, heterogeneous agents, relay inference

📄 Paper Summary

The Collaboration Gap


1️⃣ One-Sentence Summary

Through a maze-solving experiment, this paper finds that even AI models that perform well individually degrade significantly when required to collaborate with one another, revealing a "collaboration gap" between AI systems, and proposes a "relay inference" approach, in which the stronger model leads, to improve collaborative performance.

