arXiv submission date: 2025-12-15
📄 Abstract - Janus: Disaggregating Attention and Experts for Scalable MoE Inference

Large Mixture-of-Experts (MoE) model inference is challenging due to high resource demands and dynamic workloads. Existing solutions often deploy the entire model as a single monolithic unit, which applies a unified resource configuration to both attention and expert modules despite their different requirements, leading to limited scalability and resource inefficiency. In this paper, we propose Janus, a scalable MoE inference system that disaggregates attention and experts on separate GPU sub-clusters, enabling each module to be managed and scaled independently. Janus incorporates three key designs for efficient, disaggregated MoE inference. First, it proposes an adaptive two-phase communication scheme that exploits intra- and inter-node bandwidth hierarchies for low-latency data exchange. Second, motivated by the memory-bound nature of MoE modules, Janus introduces a lightweight scheduler and implements it as a GPU kernel to balance the number of activated experts across GPUs at minimal overhead, thereby reducing inference latency. Third, Janus performs fine-grained resource management to dynamically adjust expert placement and independently scale attention and MoE resources to improve overall efficiency. Evaluation shows Janus achieves up to 3.9× higher per-GPU throughput than state-of-the-art systems while meeting per-token latency requirements.
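The second design, balancing activated experts across GPUs, is at heart an online load-balancing problem. Below is a minimal Python sketch of one plausible greedy formulation (longest-processing-time-first assignment); the paper implements its scheduler as a GPU kernel, and every name here (`balance_experts`, `num_gpus`, the routing data) is illustrative, not taken from the paper.

```python
# Hypothetical sketch of the load-balancing idea behind a scheduler like
# Janus's: given how many tokens the router sent to each expert in a batch,
# assign the activated experts to expert-GPUs so the heaviest GPU is minimized.
import heapq
from collections import Counter

def balance_experts(token_expert_ids, num_gpus):
    """Greedy LPT assignment of activated experts to GPUs.

    token_expert_ids: one expert id per token-expert activation
    (e.g. flattened top-k routing results).
    Returns {gpu_id: [expert_id, ...]}.
    """
    load = Counter(token_expert_ids)          # tokens routed to each expert
    heap = [(0, g) for g in range(num_gpus)]  # (current token load, gpu id)
    heapq.heapify(heap)
    placement = {g: [] for g in range(num_gpus)}
    # Place each activated expert, heaviest first, on the least-loaded GPU.
    for expert, tokens in load.most_common():
        gpu_load, gpu = heapq.heappop(heap)
        placement[gpu].append(expert)
        heapq.heappush(heap, (gpu_load + tokens, gpu))
    return placement

# Example: skewed routing over 8 activated experts, balanced on 4 expert-GPUs.
routing = [0]*500 + [1]*300 + [2]*120 + [3]*60 + [4]*10 + [5]*6 + [6]*3 + [7]*1
print(balance_experts(routing, num_gpus=4))
```

The greedy pass runs in near-linear time over the activated experts, which is consistent with the abstract's emphasis on keeping the scheduler's overhead minimal.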

Top-level tags: systems, model training, machine learning
Detailed tags: mixture-of-experts, inference system, gpu, disaggregation, resource management, scalability

Janus: Disaggregating Attention and Experts for Scalable MoE Inference


1️⃣ One-sentence summary

This paper proposes a new inference system called Janus, which splits the attention modules and expert modules of a large Mixture-of-Experts model onto separate GPU sub-clusters that are managed and scaled independently. This addresses the poor resource efficiency and limited scalability of existing monolithic deployments and significantly improves inference speed and system throughput.
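Disaggregating attention and experts onto separate sub-clusters makes cross-node data exchange the critical path, which is what the two-phase communication scheme in the abstract targets. The back-of-the-envelope alpha-beta cost sketch below illustrates only the generic argument for hierarchy-aware exchange (coalesce on fast intra-node links, then send fewer, larger inter-node messages); every constant and function name is an assumption for illustration, not the paper's analysis.

```python
# Illustrative alpha-beta cost model: many small cross-node messages vs.
# one intra-node gather followed by coalesced per-node transfers.
# All latency/bandwidth constants below are assumed, not measured.
ALPHA_INTER = 5e-6   # per-message launch latency across nodes (s)
ALPHA_INTRA = 1e-6   # per-message latency within a node (s)
BW_INTER = 25e9      # inter-node bandwidth (bytes/s)
BW_INTRA = 300e9     # intra-node, NVLink-class bandwidth (bytes/s)

def flat_exchange(gpus_per_node, remote_nodes, bytes_per_msg):
    """Every GPU sends its own small message to every remote node."""
    msgs = gpus_per_node * remote_nodes
    return msgs * ALPHA_INTER + msgs * bytes_per_msg / BW_INTER

def two_phase_exchange(gpus_per_node, remote_nodes, bytes_per_msg):
    """Phase 1: gather onto one GPU per node over fast intra-node links.
    Phase 2: send one coalesced message per remote node."""
    gather = (gpus_per_node * ALPHA_INTRA
              + gpus_per_node * bytes_per_msg / BW_INTRA)
    coalesced = (remote_nodes * ALPHA_INTER
                 + remote_nodes * gpus_per_node * bytes_per_msg / BW_INTER)
    return gather + coalesced

if __name__ == "__main__":
    args = (8, 3, 64 * 1024)  # 8 GPUs/node, 3 remote nodes, 64 KiB each
    print(f"flat:      {flat_exchange(*args) * 1e6:.1f} us")
    print(f"two-phase: {two_phase_exchange(*args) * 1e6:.1f} us")
```

The inter-node volume is identical in both cases; the savings in this model comes entirely from replacing many per-message latencies on the slow link with a cheap intra-node gather plus a handful of coalesced transfers.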


Source: arXiv 2512.13525