arXiv submission date: 2026-04-23
📄 Abstract - From Tokens to Concepts: Leveraging SAE for SPLADE

Learned sparse IR models, such as SPLADE, offer an excellent efficiency-effectiveness tradeoff. However, they rely on the underlying backbone vocabulary, which can hinder performance (due to polysemy and synonymy) and poses a challenge for multilingual and multimodal use cases. To address this limitation, we propose replacing the backbone vocabulary with a latent space of semantic concepts learned using Sparse Auto-Encoders (SAE). Throughout this paper, we study the compatibility of these two techniques, explore training approaches, and analyze the differences between our SAE-SPLADE model and traditional SPLADE models. Our experiments demonstrate that SAE-SPLADE achieves retrieval performance comparable to SPLADE on both in-domain and out-of-domain tasks while offering improved efficiency.
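The core idea in the abstract can be sketched in a few lines: instead of scoring documents over vocabulary logits, an SAE encoder maps backbone hidden states to sparse, non-negative concept activations, which are then aggregated SPLADE-style. The sketch below is purely illustrative, with hypothetical dimensions and randomly initialized weights (in practice the SAE is trained with a reconstruction plus sparsity objective); it is not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d_model for backbone embeddings,
# n_concepts for the SAE latent (concept) space.
d_model, n_concepts = 64, 512

# SAE encoder parameters (random here; trained in practice).
W_enc = rng.normal(scale=0.1, size=(d_model, n_concepts))
b_enc = np.zeros(n_concepts)

def sae_encode(hidden_states):
    """Map backbone hidden states to sparse, non-negative concept activations."""
    # ReLU zeroes out roughly half the activations, giving sparsity.
    return np.maximum(hidden_states @ W_enc + b_enc, 0.0)

def concept_doc_vector(hidden_states):
    """SPLADE-style aggregation over SAE concepts instead of vocabulary logits:
    log-saturate the activations, then max-pool over the token axis."""
    acts = np.log1p(sae_encode(hidden_states))  # (n_tokens, n_concepts)
    return acts.max(axis=0)                     # (n_concepts,)

# Toy "document" of 5 token hidden states.
doc = rng.normal(size=(5, d_model))
vec = concept_doc_vector(doc)
print(vec.shape)  # one sparse concept vector per document
```

Retrieval then proceeds as in any learned sparse model: the dot product between such sparse query and document vectors can be served from an inverted index keyed by concept IDs rather than vocabulary terms.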

Top-level tags: natural language processing, machine learning, information retrieval
Detailed tags: sparse retrieval, SPLADE, sparse auto-encoders, semantic representation, model efficiency

From Tokens to Concepts: Leveraging SAE for SPLADE


1️⃣ One-sentence summary

The paper proposes an improved retrieval model, SAE-SPLADE, which replaces the traditional vocabulary with a space of semantic concepts learned by a sparse auto-encoder. It matches the retrieval performance of the original SPLADE model while addressing polysemy and synonymy more efficiently and improving adaptability to multilingual and multimodal scenarios.

Source: arXiv:2604.21511