CSRv2: Unlocking Ultra-Sparse Embeddings
1️⃣ One-Sentence Summary
This paper proposes a new training method called CSRv2, which uses progressive sparsity annealing and improved training objectives to fix the sharp performance drop that existing sparse-embedding techniques suffer under extreme sparsity. With only a tiny fraction of features active, it matches the performance of high-dimensional dense embeddings, delivering large gains in storage, compute, and inference speed for AI models while preserving quality.
In the era of large foundation models, the quality of embeddings has become a central determinant of downstream task performance and overall system capability. Yet widely used dense embeddings are often extremely high-dimensional, incurring substantial costs in storage, memory, and inference latency. To address these costs, Contrastive Sparse Representation (CSR) was recently proposed as a promising direction, mapping dense embeddings into high-dimensional but k-sparse vectors, in contrast to compact dense embeddings such as Matryoshka Representation Learning (MRL). Despite its promise, CSR suffers severe degradation in the ultra-sparse regime, where over 80% of neurons remain inactive, leaving much of its efficiency potential unrealized. In this paper, we introduce CSRv2, a principled training approach designed to make ultra-sparse embeddings viable. CSRv2 stabilizes sparsity learning through progressive k-annealing, enhances representational quality via supervised contrastive objectives, and ensures end-to-end adaptability with full backbone finetuning. CSRv2 reduces dead neurons from 80% to 20% and delivers a 14% accuracy gain at k=2, bringing ultra-sparse embeddings on par with CSR at k=8 and MRL at 32 dimensions, all with only two active features. While maintaining comparable performance, CSRv2 delivers a 7x speedup over MRL and yields up to 300x improvements in compute and memory efficiency relative to dense embeddings for text representation. Extensive experiments across text and vision demonstrate that CSRv2 makes ultra-sparse embeddings practical without compromising performance: it achieves a 7%/4% improvement over CSR at k=4 and widens that gap to 14%/6% at k=2 for text/vision representation. By making extreme sparsity viable, CSRv2 broadens the design space for real-time and edge-deployable AI systems where both embedding quality and efficiency are critical.
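Two ideas from the abstract are easy to make concrete: mapping a dense embedding to a high-dimensional k-sparse vector by keeping only the top-k activations, and progressively annealing k during training so the model adapts gradually to extreme sparsity. The PyTorch sketch below illustrates only those two ideas under stated assumptions; `TopKSparseHead`, `annealed_k`, and all dimensions and schedules are illustrative placeholders, not the paper's actual implementation (which additionally uses supervised contrastive objectives and full backbone finetuning).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKSparseHead(nn.Module):
    """Hypothetical head: projects a dense embedding into a wide space and
    keeps only the k largest activations, zeroing out the rest."""

    def __init__(self, dense_dim: int, sparse_dim: int):
        super().__init__()
        self.encoder = nn.Linear(dense_dim, sparse_dim)

    def forward(self, x: torch.Tensor, k: int) -> torch.Tensor:
        z = F.relu(self.encoder(x))
        # Select the k largest activations per example and scatter them back
        # into an otherwise-zero vector (at most k non-zeros per row).
        topk = torch.topk(z, k, dim=-1)
        return torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)


def annealed_k(step: int, total_steps: int, k_start: int = 64, k_end: int = 2) -> int:
    """Illustrative linear schedule: anneal the sparsity level from k_start
    down to k_end over the course of training (assumed schedule, not the paper's)."""
    frac = min(step / max(total_steps, 1), 1.0)
    return max(k_end, int(round(k_start + frac * (k_end - k_start))))


# Usage: the sparsity target k shrinks as training progresses.
head = TopKSparseHead(dense_dim=768, sparse_dim=8192)
dense = torch.randn(4, 768)  # stand-in for backbone embeddings
for step in range(0, 1001, 250):
    k = annealed_k(step, total_steps=1000)
    sparse = head(dense, k)
    # Count non-zero features in the first example (at most k after ReLU + top-k).
    print(step, k, int((sparse != 0).sum(dim=-1)[0]))
```

The design intuition reflected here is that jumping straight to k=2 gives the encoder too little signal per example; starting with a looser sparsity budget and tightening it step by step keeps more neurons alive during training, which is consistent with the reported drop in dead neurons from 80% to 20%.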
Source: arXiv: 2602.05735