arXiv submission date: 2026-01-08
📄 Abstract - Token-Level LLM Collaboration via FusionRoute

Large language models (LLMs) exhibit strengths across diverse domains, but achieving strong performance across all of them with a single general-purpose model typically requires scaling to sizes that are prohibitively expensive to train and deploy. Smaller domain-specialized models, by contrast, are far more efficient but struggle to generalize beyond their training distributions. To address this dilemma, we propose FusionRoute, a robust and effective token-level multi-LLM collaboration framework in which a lightweight router simultaneously (i) selects the most suitable expert at each decoding step and (ii) contributes a complementary logit that refines or corrects the selected expert's next-token distribution via logit addition. Whereas existing token-level collaboration methods rely solely on fixed expert outputs, we provide a theoretical analysis showing that pure expert-only routing is fundamentally limited: unless strong global coverage assumptions hold, it cannot in general realize the optimal decoding policy. By augmenting expert selection with a trainable complementary generator, FusionRoute expands the effective policy class and enables recovery of optimal value functions under mild conditions. Empirically, across both the Llama-3 and Gemma-2 families and diverse benchmarks spanning mathematical reasoning, code generation, and instruction following, FusionRoute outperforms sequence- and token-level collaboration methods, model merging, and direct fine-tuning, while remaining competitive with domain experts on their respective tasks.
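The abstract specifies the fusion rule (per-step expert selection plus an additive complementary logit) but not the router's architecture or training objective. The sketch below is a minimal illustration under assumed details: the two-headed linear `Router`, the function `fusionroute_step`, and the random `expert_logits` tensor are hypothetical stand-ins, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

class Router(torch.nn.Module):
    """Hypothetical lightweight router with two heads:
    one scores the K experts, one emits a complementary logit vector."""
    def __init__(self, d_model, num_experts, vocab_size):
        super().__init__()
        self.select = torch.nn.Linear(d_model, num_experts)  # (i) expert-selection head
        self.comp = torch.nn.Linear(d_model, vocab_size)     # (ii) complementary-logit head

    def forward(self, hidden):
        return self.select(hidden), self.comp(hidden)

def fusionroute_step(hidden, expert_logits, router):
    """One decoding step: pick an expert, then refine its logits by addition.

    hidden:        (d_model,) context representation for the current step
    expert_logits: (K, V) next-token logits from each of the K experts
    """
    select_scores, comp_logits = router(hidden)
    k = select_scores.argmax(dim=-1)        # choose the most suitable expert
    fused = expert_logits[k] + comp_logits  # logit addition refines/corrects the expert
    return F.softmax(fused, dim=-1)         # fused next-token distribution

# Toy usage with random tensors standing in for real expert forward passes.
d_model, num_experts, vocab_size = 16, 3, 100
router = Router(d_model, num_experts, vocab_size)
probs = fusionroute_step(torch.randn(d_model),
                         torch.randn(num_experts, vocab_size),
                         router)
```

The additive head is what separates this from expert-only routing: with `comp_logits` held at zero, the policy class collapses to a per-step choice among the experts' own distributions, which is exactly the limitation the paper's theoretical analysis targets.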

Top tags: llm model training systems
Detailed tags: token-level routing, multi-LLM collaboration, expert selection, logit fusion, decoding policy

Token-Level LLM Collaboration via FusionRoute


1️⃣ One-Sentence Summary

This paper proposes FusionRoute, a lightweight collaboration framework that, at each generated token, dynamically selects the most suitable expert model and fuses in a complementary prediction, allowing collaborating small models to match or exceed the performance of a single large model while avoiding the enormous cost of training and deploying very large models.

Source: arXiv 2601.05106