Geometry-Preserving Aggregation for Mixture-of-Experts Embedding Models
1️⃣ One-Sentence Summary
This paper finds that the linear aggregation used in current Mixture-of-Experts embedding models distorts the geometric structure of the embedding vectors and degrades performance. It proposes a new spherical aggregation method that resolves this issue, improving the model on multiple tasks without increasing training cost.
Mixture-of-Experts (MoE) embedding models combine expert outputs using weighted linear summation, implicitly assuming a linear subspace structure in the embedding space. This assumption is shown to be inconsistent with the geometry of expert representations. Geometric analysis of a modern MoE embedding model reveals that expert outputs lie on a shared hyperspherical manifold characterized by tightly concentrated norms and substantial angular separation. Under this geometry, linear aggregation induces inward collapse toward the manifold interior, distorting vector magnitude and direction and reducing embedding comparability. To address this inconsistency, Spherical Barycentric Aggregation (SBA) is introduced as a geometry-preserving aggregation operator that separates radial and angular components to maintain hyperspherical structure while remaining fully compatible with existing routing mechanisms. Experiments on selected tasks from the Massive Text Embedding Benchmark (MTEB), including semantic similarity, clustering, and duplicate question detection, demonstrate consistent performance improvements with identical training cost and full stability. Additional geometric analyses confirm that SBA prevents aggregation-induced collapse and preserves hyperspherical consistency, highlighting the importance of geometry-aware aggregation in MoE embedding architectures.
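The abstract contrasts standard weighted linear summation with an aggregation that handles the radial and angular components separately. The sketch below illustrates that contrast; it is not the paper's implementation. The function names (`linear_aggregate`, `spherical_aggregate`), the use of a weighted mean of expert norms for the radial part, and the iterative Fréchet (Karcher) mean on the unit sphere for the angular part are illustrative assumptions consistent with the description of Spherical Barycentric Aggregation (SBA), not details taken from the paper.

```python
import numpy as np

def linear_aggregate(expert_outputs, weights):
    # Standard MoE aggregation: weighted linear sum of expert outputs.
    # With norm-concentrated, angularly separated experts this pulls the
    # result toward the interior of the hypersphere (norm shrinkage).
    return np.einsum("e,ed->d", weights, expert_outputs)

def spherical_aggregate(expert_outputs, weights, iters=20, tol=1e-9):
    # Geometry-preserving sketch (assumed variant, not the paper's exact SBA):
    # radius = weighted mean of expert norms; direction = weighted Fréchet
    # mean on the unit sphere, computed by iterating tangent-space averaging.
    norms = np.linalg.norm(expert_outputs, axis=1)
    dirs = expert_outputs / norms[:, None]

    radius = weights @ norms                       # radial component
    mu = dirs.T @ weights
    mu /= np.linalg.norm(mu)                       # init: normalized linear mean
    for _ in range(iters):
        # Log map: lift each direction to the tangent space at mu.
        cos = np.clip(dirs @ mu, -1.0, 1.0)
        theta = np.arccos(cos)
        tang = dirs - cos[:, None] * mu
        tnorm = np.linalg.norm(tang, axis=1)
        scale = np.where(tnorm > 1e-12, theta / np.maximum(tnorm, 1e-12), 0.0)
        step = (scale[:, None] * tang).T @ weights  # weighted tangent mean
        step_norm = np.linalg.norm(step)
        if step_norm < tol:
            break
        # Exp map: move back onto the unit sphere.
        mu = np.cos(step_norm) * mu + np.sin(step_norm) * (step / step_norm)
        mu /= np.linalg.norm(mu)
    return radius * mu

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    experts = rng.normal(size=(4, 16))
    experts /= np.linalg.norm(experts, axis=1, keepdims=True)  # unit-norm experts
    w = np.array([0.4, 0.3, 0.2, 0.1])                         # routing weights
    print("linear norm:   ", np.linalg.norm(linear_aggregate(experts, w)))    # < 1: inward collapse
    print("spherical norm:", np.linalg.norm(spherical_aggregate(experts, w))) # ~ 1: stays on the sphere
```

The demo makes the abstract's geometric point concrete: with unit-norm, angularly separated experts, the linear sum has norm below 1 (collapse toward the manifold interior), while the radial/angular decomposition keeps the aggregate on the hypersphere. Because the routing weights enter only as a probability vector, such an operator can sit behind any existing gating mechanism, matching the abstract's compatibility claim.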
Source: arXiv: 2602.14039