arXiv submission date: 2026-02-23
📄 Abstract - Pyramid MoA: A Probabilistic Framework for Cost-Optimized Anytime Inference

Large Language Models (LLMs) face a persistent trade-off between inference cost and reasoning capability. While "Oracle" models (e.g., Llama-3-70B) achieve state-of-the-art accuracy, they are prohibitively expensive for high-volume deployment. Smaller models (e.g., 8B parameters) are cost-effective but struggle with complex tasks. In this work, we propose "Pyramid MoA", a hierarchical Mixture-of-Agents architecture that uses a lightweight Router to dynamically escalate queries only when necessary. By leveraging semantic agreement and confidence calibration among an ensemble of small models, our Router identifies "hard" problems with high precision. On the GSM8K benchmark, our system achieves 93.0% accuracy, effectively matching the Oracle baseline (98.0%) while reducing compute costs by 61%. We demonstrate that the system introduces negligible latency overhead (+0.82s) and allows for a tunable trade-off between performance and budget.
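The abstract's routing idea can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it stands in for the Router with a simple majority-vote agreement score over a small-model ensemble (a crude proxy for the semantic agreement and confidence calibration the authors describe), and escalates to the Oracle only when agreement falls below a tunable threshold. All names, cost constants, and the threshold value are hypothetical.

```python
from collections import Counter

# Hypothetical per-call costs in arbitrary units (not from the paper).
SMALL_COST, ORACLE_COST = 1.0, 10.0

def route(query, small_models, oracle, agreement_threshold=0.75):
    """Answer with the small ensemble when it agrees; escalate otherwise.

    `small_models` and `oracle` are callables mapping query -> answer.
    The agreement score is the fraction of small models voting for the
    majority answer -- a stand-in for the paper's richer semantic-agreement
    and confidence-calibration signal.
    """
    answers = [model(query) for model in small_models]
    majority, votes = Counter(answers).most_common(1)[0]
    agreement = votes / len(answers)
    cost = SMALL_COST * len(small_models)
    if agreement >= agreement_threshold:
        return majority, cost                      # "easy": answer locally
    return oracle(query), cost + ORACLE_COST       # "hard": escalate

# Stub models for illustration: three small models that agree on an easy
# query and scatter on a hard one.
small = [lambda q, i=i: "4" if q == "2+2" else f"guess{i}" for i in range(3)]
oracle = lambda q: "oracle-answer"

print(route("2+2", small, oracle))        # easy query stays in the ensemble
print(route("integrate x^x", small, oracle))  # disagreement triggers escalation
```

Raising `agreement_threshold` trades cost for accuracy in the same tunable way the abstract describes: a stricter threshold escalates more queries to the Oracle.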

Top-level tags: llm agents model evaluation
Detailed tags: mixture-of-agents cost optimization dynamic routing inference efficiency confidence calibration

Pyramid MoA: A Probabilistic Framework for Cost-Optimized Anytime Inference


1️⃣ One-sentence summary

This paper proposes an intelligent system architecture called "Pyramid MoA": a lightweight router escalates queries to a stronger large model only when necessary, maintaining accuracy close to the large model's (e.g., 93% on math word problems) while cutting compute costs by 61%, striking an efficient balance between performance and budget.

Source: arXiv: 2602.19509