📄 Abstract - Latent Collaboration in Multi-Agent Systems

Multi-agent systems (MAS) extend large language models (LLMs) from independent single-model reasoning to coordinated system-level intelligence. While existing LLM agents depend on text-based mediation for reasoning and communication, we take a step forward by enabling models to collaborate directly within the continuous latent space. We introduce LatentMAS, an end-to-end training-free framework that enables pure latent collaboration among LLM agents. In LatentMAS, each agent first performs auto-regressive latent thought generation through its last-layer hidden embeddings. A shared latent working memory then preserves and transfers each agent's internal representations, ensuring lossless information exchange. We provide theoretical analyses establishing that LatentMAS attains higher expressiveness and lossless information preservation with substantially lower complexity than vanilla text-based MAS. In addition, empirical evaluations across 9 comprehensive benchmarks spanning math and science reasoning, commonsense understanding, and code generation show that LatentMAS consistently outperforms strong single-model and text-based MAS baselines, achieving up to 14.6% higher accuracy, reducing output token usage by 70.8%-83.7%, and providing 4x-4.3x faster end-to-end inference. These results demonstrate that our new latent collaboration framework enhances system-level reasoning quality while offering substantial efficiency gains without any additional training. Code and data are fully open-sourced at this https URL.
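The two mechanisms the abstract names, auto-regressive latent thought generation and a shared latent working memory, can be illustrated with a minimal sketch. This is not the paper's implementation: the linear map below is a toy stand-in for a real LLM's forward pass, and all names (`latent_step`, `generate_latent_thoughts`, `memory`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 8
# Toy stand-in for a transformer producing last-layer hidden states.
W = rng.standard_normal((HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)

def latent_step(h):
    """One auto-regressive latent step: map the current hidden state to the
    next latent 'thought' vector, with no decoding to text in between."""
    return np.tanh(W @ h)

def generate_latent_thoughts(h0, steps):
    """An agent rolls out a sequence of latent thoughts from an initial state."""
    thoughts, h = [], h0
    for _ in range(steps):
        h = latent_step(h)
        thoughts.append(h)
    return thoughts

# Shared latent working memory: agents append raw hidden vectors, so a
# downstream agent consumes them directly instead of a lossy text summary.
memory = []
h0 = rng.standard_normal(HIDDEN)
memory.extend(generate_latent_thoughts(h0, steps=4))  # agent 1 writes
h1 = memory[-1]                                       # agent 2 reads memory
memory.extend(generate_latent_thoughts(h1, steps=4))  # agent 2 continues

print(len(memory))  # 8 latent thought vectors shared across both agents
```

The efficiency claim in the abstract follows from this shape: the hand-off between agents is a list of hidden vectors rather than generated tokens, so no output text is produced during collaboration.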

Top-level tags: multi-agents llm systems
Detailed tags: latent collaboration multi-agent systems hidden embeddings latent working memory efficiency optimization

📄 Paper Summary

Latent Collaboration in Multi-Agent Systems


1️⃣ One-Sentence Summary

This paper introduces LatentMAS, a training-free framework that lets multiple AI agents collaborate directly in their internal representation space. Compared with conventional text-based interaction, it significantly improves reasoning accuracy and efficiency while sharply reducing computation and communication overhead.
