arXiv submission date: 2026-02-03
📄 Abstract - DynSplit-KV: Dynamic Semantic Splitting for KVCache Compression in Efficient Long-Context LLM Inference

Although the Key-Value (KV) Cache is essential for efficient large language model (LLM) inference, its growing memory footprint in long-context scenarios poses a significant bottleneck, making KVCache compression crucial. Current compression methods rely on rigid splitting strategies, such as fixed intervals or pre-defined delimiters. We observe that rigid splitting suffers significant accuracy degradation (ranging from 5.5% to 55.1%) across different scenarios, owing to the scenario-dependent nature of semantic boundaries. This highlights the necessity of dynamic semantic splitting that matches the semantics of each scenario. Achieving this raises two challenges: (1) improper delimiter selection misaligns semantics with the KVCache, resulting in a 28.6% accuracy loss, and (2) the variable-length blocks produced by splitting introduce over 73.1% additional inference overhead. To address these challenges, we propose DynSplit-KV, a KVCache compression method that dynamically identifies delimiters for splitting. It comprises (1) a dynamic importance-aware delimiter selection strategy, improving accuracy by 49.9%, and (2) a uniform mapping strategy that transforms variable-length semantic blocks into a fixed-length format, reducing inference overhead by 4.9x. Experiments show that DynSplit-KV achieves the highest accuracy, a 2.2x speedup over FlashAttention, and a 2.6x peak memory reduction in long-context scenarios.
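The abstract gives no detail on how delimiters are scored, so here is a minimal sketch of what importance-aware delimiter selection could look like, assuming a per-token importance signal (e.g., accumulated attention mass) and a fixed pool of eligible delimiter tokens. The function name `select_delimiters`, the scoring rule, and all parameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def select_delimiters(token_ids, importance, candidate_ids, num_splits):
    """Pick up to `num_splits` split positions among candidate delimiters.

    token_ids:     (seq_len,) int array of token ids
    importance:    (seq_len,) float array, e.g. column-summed attention mass
    candidate_ids: set of token ids eligible as delimiters (".", "\n", ...)
    """
    # Positions whose token is an eligible delimiter (period, newline, ...).
    candidates = [i for i, t in enumerate(token_ids) if t in candidate_ids]
    if not candidates:
        return []
    # Keep the most "important" candidates as semantic boundaries,
    # then restore left-to-right order so they can serve as split points.
    ranked = sorted(candidates, key=lambda i: importance[i], reverse=True)
    return sorted(ranked[:num_splits])

# Toy usage: split a 12-token sequence at its 2 strongest delimiter positions.
rng = np.random.default_rng(0)
tokens = np.array([5, 9, 13, 9, 7, 13, 2, 4, 13, 8, 6, 13])
scores = rng.random(12)
print(select_delimiters(tokens, scores, candidate_ids={13}, num_splits=2))
```

The point of scoring candidates dynamically rather than splitting at every occurrence of a fixed delimiter is that the same token (e.g., a period) may or may not mark a semantic boundary depending on the scenario, which is exactly the failure mode the abstract attributes to rigid splitting.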

Top-level tags: llm, systems, model training
Detailed tags: kv cache compression, long-context inference, memory efficiency, semantic splitting, efficient transformers

DynSplit-KV: Dynamic Semantic Splitting for KVCache Compression in Efficient Long-Context LLM Inference


1️⃣ One-sentence summary

This paper proposes DynSplit-KV, a method that dynamically identifies semantic boundaries in the text to split and compress the key-value cache during LLM inference, significantly improving speed and reducing memory usage in long-context scenarios while preserving model accuracy.
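To make the uniform mapping idea concrete, here is a hedged sketch that packs variable-length semantic blocks into fixed-length slots, keeping each block's most important KV entries and zero-padding short blocks so downstream attention kernels see a regular layout. The paper's actual mapping is not described in this summary; `uniform_map`, `block_len`, and the top-k eviction rule are assumptions for illustration.

```python
import numpy as np

def uniform_map(kv, importance, boundaries, block_len):
    """Map variable-length blocks of a (seq_len, d) KV tensor onto
    fixed-length (block_len, d) slots, zero-padding short blocks."""
    starts = [0] + boundaries
    ends = boundaries + [kv.shape[0]]
    slots = []
    for s, e in zip(starts, ends):
        block, score = kv[s:e], importance[s:e]
        if len(block) > block_len:
            # Evict the least important entries, preserving token order.
            keep = np.sort(np.argsort(score)[-block_len:])
            block = block[keep]
        pad = np.zeros((block_len - len(block), kv.shape[1]))
        slots.append(np.vstack([block, pad]))
    return np.stack(slots)  # (num_blocks, block_len, d)

# Toy usage: 12 cached tokens split at positions 5 and 9, packed into
# three fixed-length slots of 4 entries each.
kv = np.random.default_rng(1).random((12, 4))
print(uniform_map(kv, np.arange(12.0), boundaries=[5, 9], block_len=4).shape)
```

A fixed-length layout is what lets the compressed cache avoid the variable-length overhead the abstract mentions: every block occupies the same slot size, so lookups and batched attention need no per-block bookkeeping.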

Source: arXiv: 2602.03184