arXiv submission date: 2026-02-03
📄 Abstract - Understanding Agent Scaling in LLM-Based Multi-Agent Systems via Diversity

LLM-based multi-agent systems (MAS) have emerged as a promising approach to tackle complex tasks that are difficult for individual LLMs. A natural strategy is to scale performance by increasing the number of agents; however, we find that such scaling exhibits strong diminishing returns in homogeneous settings, while introducing heterogeneity (e.g., different models, prompts, or tools) continues to yield substantial gains. This raises a fundamental question: what limits scaling, and why does diversity help? We present an information-theoretic framework showing that MAS performance is bounded by the intrinsic task uncertainty, not by agent count. We derive architecture-agnostic bounds demonstrating that improvements depend on how many effective channels the system accesses. Homogeneous agents saturate early because their outputs are strongly correlated, whereas heterogeneous agents contribute complementary evidence. We further introduce $K^*$, an effective channel count that can be estimated without ground-truth labels. Empirically, we show that heterogeneous configurations consistently outperform homogeneous scaling: 2 diverse agents can match or exceed the performance of 16 homogeneous agents. Our results provide principled guidelines for building efficient and robust MAS through diversity-aware design. Code and dataset are available at: this https URL.
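
The abstract does not state the exact form of these bounds, but a standard information-theoretic sketch of the kind of argument it alludes to (not the paper's specific result) combines the chain rule of mutual information with a weakened form of Fano's inequality:

$$
I(Y; A_1, \dots, A_N) \;=\; \sum_{k=1}^{N} I\big(Y; A_k \mid A_{1:k-1}\big) \;\le\; H(Y),
\qquad
P_{\mathrm{err}} \;\ge\; \frac{H(Y \mid A_1, \dots, A_N) - 1}{\log |\mathcal{Y}|}.
$$

Here $Y$ is the ground-truth answer and $A_k$ is the $k$-th agent's output. When agents are nearly identical, the conditional terms $I(Y; A_k \mid A_{1:k-1})$ shrink toward zero, so adding agents barely raises the total information the system extracts and the error bound stops improving; heterogeneous agents contribute conditional terms that stay non-negligible, which is the intuition behind counting "effective channels."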

Top-level tags: llm agents theory
Detailed tags: multi-agent systems scaling laws diversity information theory heterogeneous agents

Understanding Agent Scaling in LLM-Based Multi-Agent Systems via Diversity


1️⃣ One-Sentence Summary

This study finds that, in LLM-based multi-agent systems, simply adding more homogeneous agents yields limited performance gains, whereas introducing heterogeneous agents with different models, prompts, or tools significantly improves system performance by contributing complementary information; the underlying reason is that system performance is bounded by the intrinsic uncertainty of the task itself, not by the number of agents.
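
The paper defines its $K^*$ measure precisely; as a rough, label-free illustration of the same idea (not the paper's construction), one can measure how correlated the agents' answers are and convert that into an effective number of independent channels via a design-effect-style formula. The function name, the agreement proxy, and the formula below are illustrative assumptions.

```python
# Hypothetical sketch of an "effective channel count" estimated from agent outputs
# without ground-truth labels. This is NOT the paper's K* definition; it uses the
# classic design-effect formula N_eff = N / (1 + (N - 1) * rho_bar), where rho_bar
# is the average pairwise agreement between agents, as a correlation proxy.
from itertools import combinations


def effective_channel_count(answers_per_agent: list[list[str]]) -> float:
    """answers_per_agent[i][q] is agent i's (categorical) answer to question q."""
    n_agents = len(answers_per_agent)
    if n_agents < 2:
        return float(n_agents)
    n_questions = len(answers_per_agent[0])

    # Average pairwise agreement across all agent pairs (label-free).
    agreements = []
    for i, j in combinations(range(n_agents), 2):
        agree = sum(a == b for a, b in zip(answers_per_agent[i], answers_per_agent[j]))
        agreements.append(agree / n_questions)
    rho_bar = sum(agreements) / len(agreements)

    # Fully correlated agents -> ~1 effective channel; independent agents -> N.
    return n_agents / (1 + (n_agents - 1) * rho_bar)


# Toy example: 4 identical (homogeneous) agents vs. 4 diverse agents.
homogeneous = [["A", "B", "A", "C"]] * 4
diverse = [["A", "B", "A", "C"],
           ["A", "C", "A", "C"],
           ["B", "B", "A", "D"],
           ["A", "B", "D", "C"]]
print(effective_channel_count(homogeneous))  # ~1.0: outputs fully correlated
print(effective_channel_count(diverse))      # ~1.6: complementary answers
```

In this toy example the four identical agents collapse to roughly one effective channel while the four diverse agents retain noticeably more, mirroring the homogeneous-versus-heterogeneous scaling gap described above.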

Source: arXiv: 2602.03794