ForesightKV: Optimizing KV Cache Eviction for Reasoning Models by Learning Long-Term Contribution
1️⃣ One-Sentence Summary
This paper proposes ForesightKV, a learned cache-management framework that combines supervised learning and reinforcement learning to predict and evict unimportant intermediate KV entries during reasoning, significantly improving the efficiency and performance of large language models on long texts while using only half the cache budget.
Recently, large language models (LLMs) have shown remarkable reasoning abilities by producing long reasoning traces. However, as the sequence length grows, the key-value (KV) cache expands linearly, incurring significant memory and computation costs. Existing KV cache eviction methods mitigate this issue by discarding less important KV pairs, but they often fail to capture complex KV dependencies, resulting in performance degradation. To better balance efficiency and performance, we introduce ForesightKV, a training-based KV cache eviction framework that learns to predict which KV pairs to evict during long-text generation. We first design the Golden Eviction algorithm, which identifies the optimal KV pairs to evict at each step using future attention scores. These eviction traces and per-step scores are then distilled via supervised training with a Pairwise Ranking Loss. Furthermore, we formulate cache eviction as a Markov Decision Process and apply the GRPO algorithm to mitigate the significant increase in language-modeling loss on low-entropy tokens. Experiments with three reasoning models on the AIME2024 and AIME2025 benchmarks demonstrate that ForesightKV consistently outperforms prior methods at only half the cache budget, while benefiting synergistically from both supervised and reinforcement learning.
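The Golden Eviction oracle is easiest to see in code. Below is a minimal single-head sketch (our reconstruction under stated assumptions, not the paper's implementation): given the full attention matrix from an uncompressed run, it ranks each cached KV pair by the attention it will receive from future tokens and greedily evicts the least-used entry whenever the cache exceeds its budget.

```python
import numpy as np

def golden_eviction(attn, budget):
    """Oracle eviction trace from a full attention matrix (toy sketch).

    attn:   (T, T) lower-triangular attention weights from a full,
            uncompressed run; attn[i, j] is the weight token i places
            on cached token j.
    budget: maximum number of KV pairs kept at any step.

    Returns, for each step t, the list of cached positions kept, plus
    the oracle "future contribution" scores usable as training targets.
    """
    T = attn.shape[0]
    kept, kept_sets, score_sets = [], [], []
    for t in range(T):
        kept.append(t)  # the new token's KV pair always enters the cache
        # Future contribution of cached position j: total attention it
        # receives from tokens generated strictly after step t.
        future = attn[t + 1:, kept].sum(axis=0)
        if len(kept) > budget:
            kept.pop(int(np.argmin(future)))   # evict least future-used
            future = attn[t + 1:, kept].sum(axis=0)
        kept_sets.append(list(kept))
        score_sets.append(future)
    return kept_sets, score_sets
```

In the actual method this would run per layer and per attention head; the resulting traces and scores then serve as supervision targets for the eviction predictor.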
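Distillation trains a lightweight predictor to reproduce the oracle's ranking. The exact form of the Pairwise Ranking Loss is not given in this summary, so the RankNet-style variant below is an assumption: it penalizes every ordered pair of cached KV entries whose predicted scores disagree with the oracle's ordering.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(pred_scores, oracle_scores):
    """RankNet-style pairwise loss (assumed form).

    pred_scores:   (N,) predicted importance of N cached KV pairs
    oracle_scores: (N,) Golden Eviction future-attention scores
    """
    # diff[i, j] = s_i - s_j for predicted scores
    diff = pred_scores.unsqueeze(1) - pred_scores.unsqueeze(0)
    # label[i, j] = 1 where the oracle ranks entry i above entry j
    label = (oracle_scores.unsqueeze(1) > oracle_scores.unsqueeze(0)).float()
    # Logistic loss log(1 + exp(-(s_i - s_j))) on each ordered pair,
    # averaged over the pairs the oracle actually orders.
    per_pair = F.softplus(-diff) * label
    return per_pair.sum() / label.sum().clamp(min=1.0)
```

A ranking loss fits this setting better than regression: eviction only needs the relative order of KV pairs, not their absolute future-attention magnitudes.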
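Finally, framing eviction as an MDP lets GRPO refine the predictor on outcome rewards. The sketch below shows the standard GRPO ingredients, group-relative advantage normalization plus a PPO-style clipped surrogate; the reward design (e.g., the negative eviction-induced increase in language-modeling loss, targeting the low-entropy-token degradation mentioned above) is our assumption.

```python
import torch

def grpo_loss(logprobs, old_logprobs, rewards, clip_eps=0.2):
    """Minimal GRPO-style objective (sketch; reward design assumed).

    logprobs, old_logprobs: (G, L) per-token log-probs of G sampled
        eviction trajectories under the current and behavior policies
        (old_logprobs should be detached).
    rewards: (G,) scalar reward per trajectory, e.g. the negative
        increase in language-modeling loss caused by that trajectory's
        evictions.
    """
    # Group-relative advantage: normalize rewards within the group,
    # so no learned value baseline is needed.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
    adv = adv.unsqueeze(1)  # broadcast over the L token positions

    ratio = torch.exp(logprobs - old_logprobs)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # PPO-style clipped surrogate, averaged over tokens and the group.
    return -torch.min(ratio * adv, clipped * adv).mean()
```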
Source: arXiv: 2602.03203