Self-Routing: Parameter-Free Expert Routing from Hidden States
1️⃣ One-Sentence Summary
This paper proposes a new method called "Self-Routing" that needs no additional learned parameters: it uses part of the model's internal hidden state directly to decide how computation is assigned to the different expert modules, simplifying the Mixture-of-Experts architecture and improving the balance of expert utilization while maintaining performance.
Mixture-of-Experts (MoE) layers increase model capacity by activating only a small subset of experts per token, and typically rely on a learned router to map hidden states to expert assignments. In this work, we ask whether a dedicated learned router is strictly necessary in the MoE settings we study. We propose Self-Routing, a parameter-free routing mechanism that uses a designated subspace of the token hidden state directly as expert logits, eliminating the router projection entirely while leaving the rest of the MoE layer unchanged. We evaluate Self-Routing on GPT-2-scale language modeling and ImageNet-1K classification by comparing it against a standard learned router, random-routing baselines, and dense non-MoE baselines. Our results show that Self-Routing remains competitive with the learned-router baseline while removing all dedicated routing parameters, and yields more balanced expert utilization, with about 17 % higher average normalized routing entropy and no explicit load-balancing loss. On ImageNet-1K with DeiT-S/16, Self-Routing also slightly improves over the corresponding learned-router MoE. These findings suggest that effective MoE routing can emerge from the hidden representation itself without requiring a separate learned router module.
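The core idea can be illustrated with a minimal sketch. The code below assumes the "designated subspace" is simply the first num_experts dimensions of each token's hidden state (the paper may choose the subspace differently), and uses a standard top-k gating and FFN-expert setup around it; only the routing step differs from a learned-router MoE.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfRoutingMoE(nn.Module):
    """Sketch of a parameter-free "self-routing" MoE layer.

    Instead of a learned router projection (logits = h @ W_r), a fixed
    slice of the token hidden state is read off directly as expert
    logits. Here that slice is assumed to be the first `num_experts`
    dimensions; this is an illustrative choice, not the paper's exact one.
    """

    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int = 2):
        super().__init__()
        assert num_experts <= d_model
        self.num_experts = num_experts
        self.top_k = top_k
        # Standard expert FFNs; only the routing differs from a learned-router MoE.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, d_model) -> flatten to a list of tokens
        tokens = h.reshape(-1, h.shape[-1])

        # Parameter-free routing: reuse a hidden-state subspace as expert logits.
        logits = tokens[:, : self.num_experts]        # (tokens, experts), no router weights
        topk_vals, topk_idx = logits.topk(self.top_k, dim=-1)
        gates = F.softmax(topk_vals, dim=-1)          # renormalize over the selected experts

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = topk_idx[:, slot] == e
                if mask.any():
                    out[mask] += gates[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(h)
```

A drop-in learned-router variant would only differ by computing `logits = self.router(tokens)` with `self.router = nn.Linear(d_model, num_experts)`; removing that projection is what makes the routing parameter-free.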
Source: arXiv: 2604.00421