arXiv submission date: 2026-01-23
📄 Abstract - Least-Loaded Expert Parallelism: Load Balancing An Imbalanced Mixture-of-Experts

Mixture-of-Experts (MoE) models are typically pre-trained with explicit load-balancing constraints to ensure statistically balanced expert routing. Despite this, we observe that even well-trained MoE models exhibit significantly imbalanced routing. This behavior is arguably natural, and even desirable, as imbalanced routing allows models to concentrate domain-specific knowledge within a subset of experts. Expert parallelism (EP) is designed to scale MoE models by distributing experts across multiple devices, but with a less-discussed assumption of balanced routing. Under extreme imbalance, EP can funnel a disproportionate number of tokens to a small number of experts, leading to compute- and memory-bound failures on overloaded devices during post-training or inference, where explicit load balancing is often inapplicable. We propose Least-Loaded Expert Parallelism (LLEP), a novel EP algorithm that dynamically reroutes excess tokens and associated expert parameters from overloaded devices to underutilized ones. This ensures that all devices complete their workloads within the minimum collective latency while respecting memory constraints. Across different model scales, LLEP achieves up to a 5x speedup and a 4x reduction in peak memory usage compared to standard EP. This enables faster and higher-throughput post-training and inference, with a ~1.9x speedup for gpt-oss-120b. We support our method with extensive theoretical analysis and comprehensive empirical evaluations, including ablation studies. These results illuminate key trade-offs and enable a principled framework for hardware-specific hyper-parameter tuning to achieve optimal performance.
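To make the rerouting idea above concrete, the sketch below shows one simple greedy heuristic: overloaded devices shed excess tokens for their heaviest experts to the least-loaded device that still has memory to hold a replica of the expert's weights. This is a minimal illustrative sketch, not the paper's implementation; the function name `least_loaded_rebalance`, the data structures, and the `mem_slots` memory model (a cap on replicated experts per device) are assumptions made for illustration.

```python
def least_loaded_rebalance(token_counts, placement, num_devices, mem_slots):
    """Greedy least-loaded rebalancing (illustrative sketch, not the paper's API).

    token_counts: {expert_id: number of tokens routed to that expert}
    placement:    {expert_id: home device index under standard EP}
    mem_slots:    max number of extra expert replicas a device may host
    Returns {(expert_id, device_idx): tokens processed there}.
    """
    total = sum(token_counts.values())
    target = -(-total // num_devices)          # ceil of the mean per-device load

    load = [0] * num_devices                   # tokens currently assigned per device
    replicas = [set() for _ in range(num_devices)]
    work = {}
    for e, n in token_counts.items():          # start from the routed assignment
        load[placement[e]] += n
        work[(e, placement[e])] = n

    # Process the heaviest experts first; shed excess tokens from overloaded
    # home devices to the least-loaded device with memory for the expert.
    for e in sorted(token_counts, key=token_counts.get, reverse=True):
        src = placement[e]
        while load[src] > target and work[(e, src)] > 0:
            candidates = [d for d in range(num_devices)
                          if d != src and (e in replicas[d]
                                           or len(replicas[d]) < mem_slots)]
            if not candidates:
                break                          # no device can hold this expert
            dst = min(candidates, key=lambda d: load[d])
            move = min(load[src] - target,     # excess on the home device
                       work[(e, src)],         # tokens of this expert still there
                       max(target - load[dst], 0))
            if move <= 0:
                break                          # every candidate is already at target
            work[(e, src)] -= move
            work[(e, dst)] = work.get((e, dst), 0) + move
            replicas[dst].add(e)               # the expert's weights move along
            load[src] -= move
            load[dst] += move
    return work


# Example: four experts routed very unevenly across two devices.
counts = {0: 900, 1: 50, 2: 30, 3: 20}
homes = {0: 0, 1: 0, 2: 1, 3: 1}
print(least_loaded_rebalance(counts, homes, num_devices=2, mem_slots=1))
# Device loads end up at 500/500 instead of 950/50.
```

In this toy formulation the per-device target is simply the ceiling of the mean token count and memory is modeled as a replica cap; the paper's theoretical analysis treats the underlying latency and memory trade-offs far more rigorously.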

Top-level tags: systems, model training, machine learning
Detailed tags: mixture-of-experts, load balancing, expert parallelism, distributed training, model inference

Least-Loaded Expert Parallelism: Load Balancing An Imbalanced Mixture-of-Experts


1️⃣ One-Sentence Summary

This paper proposes a new algorithm, Least-Loaded Expert Parallelism, which dynamically shifts compute work and parameters from overloaded devices to underutilized ones, resolving the device-level bottlenecks that imbalanced token routing causes in Mixture-of-Experts inference and substantially improving running speed and memory efficiency.

Source: arXiv:2601.17111