arXiv submission date: 2026-01-28
📄 Abstract - $\mathbb{R}^{2k}$ is Theoretically Large Enough for Embedding-based Top-$k$ Retrieval

This paper studies the minimal dimension required to embed subset memberships ($m$ elements and ${m\choose k}$ subsets of at most $k$ elements) into vector spaces, denoted the Minimal Embeddable Dimension (MED). Tight bounds on the MED are derived theoretically and supported empirically for various notions of "distance" or "similarity," including the $\ell_2$ metric, inner product, and cosine similarity. In addition, we conduct a numerical simulation in a more achievable setting, where each of the ${m\choose k}$ subset embeddings is chosen as the centroid of the embeddings of its contained elements. Our simulation readily realizes a logarithmic dependency between the MED and the number of elements to embed. These findings imply that the limitations of embedding-based retrieval stem primarily from learnability challenges, not geometric constraints, guiding future algorithm design.
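To make the centroid setting concrete, below is a minimal Python sketch of that style of simulation. It is not the paper's code: the paper presumably optimizes the element embeddings, whereas this sketch only draws random Gaussian embeddings and scans the dimension upward until exact top-$k$ retrieval holds, so it gives a crude upper-bound flavor of the experiment. All function names (`exact_topk_everywhere`, `smallest_working_dim`) are hypothetical.

```python
# Sketch, under assumptions stated above:
# - m element embeddings drawn i.i.d. from a standard Gaussian in R^d,
# - each size-k subset embedded as the centroid (mean) of its elements,
# - retrieval succeeds if the k nearest elements (in l2 distance) to the
#   centroid are exactly the subset's elements.
import itertools
import numpy as np

def exact_topk_everywhere(X: np.ndarray, k: int) -> bool:
    """Check that every size-k subset is exactly recovered from its centroid."""
    m = X.shape[0]
    for subset in itertools.combinations(range(m), k):
        centroid = X[list(subset)].mean(axis=0)
        dists = np.linalg.norm(X - centroid, axis=1)
        topk = set(np.argsort(dists)[:k])  # indices of the k nearest elements
        if topk != set(subset):
            return False
    return True

def smallest_working_dim(m: int, k: int, trials: int = 20, seed: int = 0) -> int:
    """Scan d upward until some random embedding achieves exact top-k retrieval."""
    rng = np.random.default_rng(seed)
    d = 1
    while True:
        for _ in range(trials):
            X = rng.standard_normal((m, d))
            if exact_topk_everywhere(X, k):
                return d
        d += 1

if __name__ == "__main__":
    # Roughly track how the working dimension grows as m doubles.
    for m in (8, 16, 32):
        print(f"m={m}, k=2 -> d ~ {smallest_working_dim(m, k=2)}")
```

Running the loop over doubling $m$ is one simple way to eyeball the logarithmic dependency the abstract describes; with optimized rather than random embeddings, the required dimension should only be smaller.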

Top-level tags: theory machine learning model training
Detailed tags: embedding dimension top-k retrieval geometric constraints subset membership vector space embedding

$\mathbb{R}^{2k}$ is Theoretically Large Enough for Embedding-based Top-$k$ Retrieval


1️⃣ One-sentence summary

Through theory and experiments, this paper shows that under a variety of similarity measures, a vector space of only $2k$ dimensions suffices to exactly encode all elements and their subset relationships, indicating that the performance bottleneck of current embedding-based retrieval systems lies mainly in the model's learning capacity rather than in geometric constraints of the vector space.

Source: arXiv: 2601.20844