Reasoning Cache: Continual Improvement Over Long Horizons via Short-Horizon RL
1️⃣ One-Sentence Summary
This paper proposes a new algorithm called "Reasoning Cache" that lets large language models, when tackling complex reasoning problems, keep improving through iterative self-summarization and refinement, sustaining gains far beyond the horizons seen during training.
Large Language Models (LLMs) that can continually improve beyond their training budgets are able to solve increasingly difficult problems by adapting at test time, a property we refer to as extrapolation. However, standard reinforcement learning (RL) operates over fixed problem distributions and training budgets, which limits extrapolation under distribution shift at test time. To address this, we introduce RC (Reasoning Cache), an iterative decoding algorithm that replaces standard autoregressive decoding during both training and inference. RC exploits an asymmetry between the response-generation and summarization capabilities of LLMs to construct reasoning chains that consistently improve across iterations. Models trained to use RC can extrapolate and continually improve over reasoning horizons more than an order of magnitude longer than those seen during training. Empirically, training a 4B model with RC using a 16k-token training budget improves performance on HMMT 2025 from 40% to nearly 70% with 0.5M tokens at test time, outperforming both comparably sized models and many larger reasoning LLMs. Finally, we show that models trained with RC can more effectively leverage existing scaffolds to further scale test-time performance, thanks to the improved summary-conditioned generation abilities learned during training.
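The abstract describes RC only at a high level: a short-horizon generation step, a summarization step that compresses the reasoning so far into a cache, and repetition of this loop at test time well beyond the training budget. The sketch below is a minimal illustration of that loop under those assumptions; the function names `generate_reasoning` and `summarize`, the cache format, the iteration count, and the stopping rule are all hypothetical stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of an RC-style iterative decoding loop, based only on the
# abstract. `generate_reasoning` and `summarize` are hypothetical placeholders
# for model calls; a real system would invoke an LLM in both places.

def generate_reasoning(problem: str, summary: str, budget_tokens: int) -> str:
    """Hypothetical model call: produce a bounded reasoning segment
    conditioned on the problem and the current summary (the 'cache')."""
    return f"<reasoning on '{problem}' given cache '{summary}' within {budget_tokens} tokens>"


def summarize(summary: str, segment: str) -> str:
    """Hypothetical model call: compress the previous cache plus the new
    reasoning segment into an updated, bounded summary."""
    return (summary + " | " + segment)[-512:]  # crude truncation as a placeholder


def reasoning_cache_decode(problem: str, iterations: int = 8, budget_tokens: int = 16_000) -> str:
    """Alternate short-horizon generation with summarization so that total
    test-time reasoning can extend far beyond the per-iteration budget."""
    summary = ""
    for _ in range(iterations):
        segment = generate_reasoning(problem, summary, budget_tokens)
        summary = summarize(summary, segment)  # cache carried across iterations
    return summary


if __name__ == "__main__":
    print(reasoning_cache_decode("HMMT 2025 problem 1", iterations=3))
```

The key design point suggested by the abstract is that only the summary is carried between iterations, so each generation step stays within a short, fixed token budget while the overall reasoning horizon grows with the number of iterations.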
Source: arXiv: 2602.03773