arXiv submission date: 2026-04-06
📄 Abstract - REAM: Merging Improves Pruning of Experts in LLMs

Mixture-of-Experts (MoE) large language models (LLMs) are among the top-performing architectures. The largest models, often with hundreds of billions of parameters, pose significant memory challenges for deployment. Traditional approaches to reducing memory requirements include weight pruning and quantization. Motivated by Router-weighted Expert Activation Pruning (REAP), which prunes experts, we propose a novel method, Router-weighted Expert Activation Merging (REAM). Instead of removing experts, REAM groups them and merges their weights, better preserving original performance. We evaluate REAM against REAP and other baselines across multiple MoE LLMs on diverse multiple-choice (MC) question answering and generative (GEN) benchmarks. Our results reveal a trade-off between MC and GEN performance that depends on the mix of calibration data. By controlling the mix of general, math, and coding data, we examine the Pareto frontier of this trade-off and show that REAM often outperforms the baselines and in many cases is comparable to the original uncompressed models.

Top-level tags: llm model training systems
Detailed tags: mixture-of-experts model compression expert merging memory efficiency sparse models

REAM: Merging Improves Pruning of Experts in LLMs


1️⃣ One-sentence summary

This paper proposes a new method called REAM, which compresses large Mixture-of-Experts models by intelligently grouping and merging expert weights; compared with directly pruning experts, it better preserves the model's original performance across a range of tasks.
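To make the idea concrete, here is a minimal sketch of router-weighted expert merging. This is an illustration of the general technique, not the paper's exact algorithm: all function names are hypothetical, and the assumption is that each expert in a group contributes to the merged weight matrix in proportion to the router probability mass it accumulated on calibration data.

```python
def merge_experts(expert_weights, router_scores):
    """Merge a group of expert weight matrices into one (illustrative sketch).

    expert_weights: list of 2-D weight matrices (lists of rows), one per expert.
    router_scores:  accumulated router probability mass per expert,
                    e.g. summed gate values over a calibration set (assumed).
    """
    total = sum(router_scores)
    # Normalize router scores into merge coefficients that sum to 1.
    coeffs = [s / total for s in router_scores]
    rows, cols = len(expert_weights[0]), len(expert_weights[0][0])
    merged = [[0.0] * cols for _ in range(rows)]
    # Weighted average: experts the router used more dominate the merge.
    for w, c in zip(expert_weights, coeffs):
        for i in range(rows):
            for j in range(cols):
                merged[i][j] += c * w[i][j]
    return merged

# Two tiny 2x2 "experts"; expert 0 received twice the router mass.
e0 = [[1.0, 0.0], [0.0, 1.0]]
e1 = [[0.0, 1.0], [1.0, 0.0]]
merged = merge_experts([e0, e1], router_scores=[2.0, 1.0])
# merged is a 2/3 : 1/3 blend of e0 and e1.
```

Unlike pruning, which discards the less-activated experts outright, this kind of merge retains a weighted trace of every expert in the group, which is the intuition behind REAM's better preservation of the original model's behavior.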

Source: arXiv:2604.04356