DASH-KV: Accelerating Long-Context LLM Inference via Asymmetric KV Cache Hashing
1️⃣ One-Sentence Summary
DASH-KV proposes a new method that uses asymmetric deep hashing to reformulate attention computation as approximate nearest-neighbor search, achieving linear complexity in long-context inference and substantially reducing computational overhead while preserving generation quality.
2️⃣ Abstract

The quadratic computational complexity of the standard attention mechanism constitutes a fundamental bottleneck for large language models in long-context inference. While existing KV cache compression methods alleviate memory pressure, they often sacrifice generation quality and fail to address the high overhead of floating-point arithmetic. This paper introduces DASH-KV, an acceleration framework that reformulates attention as approximate nearest-neighbor search via asymmetric deep hashing. Under this paradigm, we design an asymmetric encoding architecture that maps queries and keys differently, reflecting their distinct precision and reuse characteristics. To balance efficiency and accuracy, we further introduce a dynamic mixed-precision mechanism that adaptively retains full-precision computation for critical tokens. Extensive experiments on LongBench demonstrate that DASH-KV significantly outperforms state-of-the-art baselines while matching the performance of full attention, reducing inference complexity from O(N^2) to linear O(N). The code is available at this https URL.
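The core idea described in the abstract can be sketched in a few lines: hash keys once into compact binary codes with a key-side encoder, hash each incoming query with a separate query-side encoder (the asymmetry), select the keys whose codes are nearest in Hamming distance, and run full-precision attention only over that small subset. The sketch below is illustrative only; it stands in random sign projections for the paper's learned deep hash encoders, and `top_k`, `bits`, and all function names are assumptions, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(x, proj):
    # Sign of random projections -> binary codes; a stand-in for the
    # paper's learned (asymmetric) deep hash encoders.
    return (x @ proj > 0).astype(np.uint8)

def dash_kv_attention_sketch(q, K, V, proj_q, proj_k, top_k=8):
    # Approximate attention as nearest-neighbor search: cheap Hamming-
    # distance comparisons over cached key codes replace full-precision
    # dot products for candidate selection; exact attention is computed
    # only over the retrieved top_k keys.
    code_q = hash_codes(q[None, :], proj_q)[0]   # query-side encoding
    codes_k = hash_codes(K, proj_k)              # key-side codes (cacheable)
    ham = (codes_k != code_q).sum(axis=1)        # Hamming distances, shape (N,)
    idx = np.argsort(ham)[:top_k]                # approximate nearest keys
    scores = K[idx] @ q / np.sqrt(q.shape[0])    # full-precision, only top_k
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[idx]

d, n, bits = 64, 256, 32
proj_q = rng.standard_normal((d, bits))  # asymmetric: separate maps
proj_k = rng.standard_normal((d, bits))  # for queries and keys
q = rng.standard_normal(d)
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
out = dash_kv_attention_sketch(q, K, V, proj_q, proj_k)
print(out.shape)  # (64,)
```

Because the per-query work is bit comparisons over N codes plus exact attention over a constant-size subset, the floating-point cost per decoding step no longer grows with full N×d dot products, which is the source of the O(N^2)-to-linear reduction the abstract claims. The dynamic mixed-precision mechanism (keeping critical tokens in full precision) is not modeled here.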
Source: arXiv: 2604.19351