Holistic Scaling Laws for Optimal Mixture-of-Experts Architecture Optimization
1️⃣ One-Sentence Summary
This paper proposes a new framework that jointly enforces three key constraints (compute, active parameters, and total parameters) to solve the problem of finding optimal architectures for Mixture-of-Experts models within a vast design space, yielding complete and flexibly adjustable architecture configurations for any compute budget.
Scaling laws for Large Language Models govern macroscopic resource allocation, yet translating them into precise Mixture-of-Experts (MoE) architectural configurations remains an open problem due to the combinatorially vast design space. Existing MoE scaling studies are constrained by experimental budgets to either augment scaling formulas with extra MoE variables, risking unreliable fits, or fix all non-MoE factors, ignoring global interactions. We propose a reusable framework for holistic MoE architectural optimization that bridges this gap. We first show that FLOPs per token alone is an inadequate fairness metric for MoE models because differing computational densities across layer types can inflate parameters without proportional compute cost, and establish a joint constraint triad of FLOPs per token, active parameters, and total parameters. We then reduce the 16-dimensional architectural search space to two sequential low-dimensional phases through algebraic constraints and a rank-preserving property of the hidden dimension. Validated across hundreds of MoE models spanning six orders of magnitude in compute, our framework yields robust scaling laws that map any compute budget to a complete, optimal MoE architecture. A key finding is that the near-optimal configuration band widens with scale, giving practitioners quantitative flexibility to balance scaling law recommendations against infrastructure constraints.
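To make the constraint triad concrete, here is a minimal Python sketch (not the paper's code) that computes all three quantities for a simplified MoE transformer. The config fields (`d_model`, `n_layers`, `n_experts`, `top_k`, `d_ff`) and the 2-FLOPs-per-weight estimate are illustrative assumptions, not the paper's actual 16-dimensional parameterization; embeddings, biases, and the router are omitted for brevity.

```python
# Sketch of the joint constraint triad for a hypothetical MoE config:
# total parameters, active parameters, and FLOPs per token.
from dataclasses import dataclass

@dataclass
class MoEConfig:
    d_model: int    # hidden dimension
    n_layers: int   # number of transformer blocks
    n_experts: int  # experts per MoE layer
    top_k: int      # experts activated per token
    d_ff: int       # feed-forward width of each expert

def constraint_triad(cfg: MoEConfig) -> dict:
    # Attention: Q, K, V, O projections, each d_model x d_model.
    attn_params_per_layer = 4 * cfg.d_model * cfg.d_model
    # One expert: up- and down-projection, 2 * d_model * d_ff weights.
    expert_params = 2 * cfg.d_model * cfg.d_ff

    total_params = cfg.n_layers * (
        attn_params_per_layer + cfg.n_experts * expert_params)
    # Only top_k experts fire per token, so they alone count as "active".
    active_params = cfg.n_layers * (
        attn_params_per_layer + cfg.top_k * expert_params)
    # Rule of thumb: ~2 FLOPs per active weight per token (multiply-add).
    flops_per_token = 2 * active_params
    return {"total_params": total_params,
            "active_params": active_params,
            "flops_per_token": flops_per_token}

# Two configs can match on FLOPs per token yet differ hugely in total
# parameters -- the reason FLOPs alone is an inadequate fairness metric.
narrow = MoEConfig(d_model=1024, n_layers=24, n_experts=8,  top_k=2, d_ff=4096)
wide   = MoEConfig(d_model=1024, n_layers=24, n_experts=64, top_k=2, d_ff=4096)
print(constraint_triad(narrow))
print(constraint_triad(wide))
```

Running both configs shows identical FLOPs per token and active parameters but an eightfold gap in total parameters, which is why the paper argues for constraining all three quantities jointly.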
Source: arXiv: 2603.21862