Do Domain-specific Experts exist in MoE-based LLMs?
1️⃣ One-sentence summary
Through an empirical study, this paper finds that domain-specific experts do exist in Mixture-of-Experts (MoE) based large language models. Building on this finding, it proposes a domain-steering framework that requires no additional training or inference cost, effectively improving model performance on both target and non-target domains.
In the era of Large Language Models (LLMs), the Mixture of Experts (MoE) architecture has emerged as an effective approach for training extremely large models with improved computational efficiency. This success builds upon extensive prior research aimed at enhancing expert specialization in MoE-based LLMs. However, the nature of such specializations and how they can be systematically interpreted remain open research challenges. In this work, we investigate this gap by posing a fundamental question: *Do domain-specific experts exist in MoE-based LLMs?* To answer this question, we evaluate ten advanced MoE-based LLMs ranging from 3.8B to 120B parameters and provide empirical evidence for the existence of domain-specific experts. Building on this finding, we propose **Domain Steering Mixture of Experts (DSMoE)**, a training-free framework that introduces zero additional inference cost and outperforms both well-trained MoE-based LLMs and strong baselines, including Supervised Fine-Tuning (SFT). Experiments on four advanced open-source MoE-based LLMs across both target and non-target domains demonstrate that our method achieves strong performance and robust generalization without increasing inference cost or requiring additional retraining. Our implementation is publicly available at this https URL.
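The abstract does not detail DSMoE's mechanism, but one plausible reading of "domain steering" with zero training and zero extra inference cost is an additive bias on the router logits of experts identified as domain-specific. The sketch below is purely illustrative and is an assumption, not the paper's actual method; the function name, the bias scheme, and the top-k routing shape are all hypothetical.

```python
import numpy as np

def route_tokens(logits, k=2, domain_experts=None, bias=1.0):
    """Top-k MoE routing with an optional additive bias on
    hypothesised domain-specific experts (illustrative sketch,
    NOT the paper's verified DSMoE mechanism)."""
    logits = np.asarray(logits, dtype=float).copy()
    if domain_experts is not None:
        # Steer routing toward the identified domain experts.
        logits[:, domain_experts] += bias
    # Indices of the k largest routing logits per token.
    return np.argsort(logits, axis=-1)[:, -k:]

# One token, four experts: without steering, experts 1 and 3 win;
# biasing expert 0 pulls it into the top-k without any retraining.
scores = [[0.1, 0.9, 0.2, 0.8]]
plain = route_tokens(scores, k=2)
steered = route_tokens(scores, k=2, domain_experts=[0], bias=1.0)
```

Because the bias is applied only at routing time, such a scheme would add no parameters and no extra FLOPs beyond the unchanged top-k selection, which matches the "training-free, zero additional inference cost" claim.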
Source: arXiv: 2604.05267