arXiv submission date: 2026-03-29
📄 Abstract - Expert Streaming: Accelerating Low-Batch MoE Inference via Multi-chiplet Architecture and Dynamic Expert Trajectory Scheduling

Mixture-of-Experts (MoE) is a promising approach for edge AI with low-batch inference. Yet on-device deployments often face limited on-chip memory and severe workload imbalance, and the prevalent use of offloading further incurs off-chip memory access bottlenecks. Moreover, MoE sparsity and dynamic gating push distributed strategies toward much finer granularity and introduce runtime scheduling considerations. Recently, high die-to-die (D2D) bandwidth chiplet interconnects have created new opportunities for multi-chiplet systems to address workload imbalance and offloading bottlenecks with fine-grained scheduling. In this paper, we propose Fully Sharded Expert Data Parallelism (FSE-DP), a parallelization paradigm specifically architected for low-batch MoE inference on multi-chiplet accelerators. FSE-DP attains adaptive computation-communication overlap and balanced load by orchestrating fine-grained, complementary expert streams along dynamic trajectories across high-bandwidth D2D links. The attendant dataflow complexity is tamed by a minimal, hardware-amenable set of virtualization rules and a lightweight scheduling algorithm. Our approach achieves a 1.22x to 2.00x speedup over state-of-the-art baselines and saves up to 78.8% of on-chip memory.
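The paper's actual trajectory scheduling algorithm is not detailed in this abstract. As an illustrative sketch of the underlying load-balancing problem it targets, the following toy example assigns experts (weighted by their routed-token counts) to chiplets using a greedy longest-processing-time heuristic; all names and the heuristic itself are assumptions for illustration, not the paper's method.

```python
import heapq

def schedule_experts(expert_loads, num_chiplets):
    """Greedily assign experts to chiplets to balance per-chiplet load.

    expert_loads: dict mapping expert id -> number of tokens routed to it
    Returns a dict mapping chiplet id -> list of assigned expert ids.
    """
    # Min-heap of (current accumulated load, chiplet id)
    heap = [(0, c) for c in range(num_chiplets)]
    heapq.heapify(heap)
    assignment = {c: [] for c in range(num_chiplets)}
    # Place the heaviest experts first so lighter ones can fill gaps
    for expert, load in sorted(expert_loads.items(), key=lambda kv: -kv[1]):
        cur_load, chiplet = heapq.heappop(heap)
        assignment[chiplet].append(expert)
        heapq.heappush(heap, (cur_load + load, chiplet))
    return assignment

# Example: five experts with skewed token counts, two chiplets
plan = schedule_experts({0: 9, 1: 7, 2: 4, 3: 4, 4: 2}, num_chiplets=2)
# Both chiplets end up with a total load of 13 tokens
```

In a real system the per-expert loads change every decoding step with the gating decisions, so such an assignment would be recomputed (or incrementally adjusted) at runtime, which is the dynamic-scheduling aspect the paper emphasizes.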

Top-level tags: systems, model training, machine learning
Detailed tags: mixture-of-experts, inference acceleration, chiplet architecture, dynamic scheduling, parallelization

Expert Streaming: Accelerating Low-Batch MoE Inference via Multi-chiplet Architecture and Dynamic Expert Trajectory Scheduling


1️⃣ One-sentence summary

This paper proposes a new method called Fully Sharded Expert Data Parallelism for efficiently running low-batch Mixture-of-Experts models on high-performance accelerators built from multiple chiplets; through careful dynamic scheduling and dataflow design, it significantly speeds up inference while saving a large amount of on-chip memory.

Source: arXiv: 2603.27624