When to Memorize and When to Stop: Gated Recurrent Memory for Long-Context Reasoning
1️⃣ One-sentence summary
This paper proposes a new method called GRU-Mem, which introduces two text-controlled "gates" that intelligently decide when to update memory and when to stop computation, making large language models both more accurate and more efficient when processing long texts.
While reasoning over long context is crucial for various real-world applications, it remains challenging for large language models (LLMs), which suffer from performance degradation as the context length grows. Recent work, MemAgent, has tried to tackle this by processing context chunk-by-chunk in an RNN-like loop and updating a textual memory for final answering. However, this naive recurrent memory update faces two crucial drawbacks: (i) the memory can quickly bloat because it updates indiscriminately, even on evidence-free chunks; and (ii) the loop lacks an exit mechanism, leading to unnecessary computation even after sufficient evidence has been collected. To address these issues, we propose GRU-Mem, which incorporates two text-controlled gates for more stable and efficient long-context reasoning. Specifically, in GRU-Mem, the memory is updated only when the update gate is open, and the recurrent loop exits immediately once the exit gate is open. To endow the model with these capabilities, we introduce two reward signals, $r^{\text{update}}$ and $r^{\text{exit}}$, within end-to-end RL, rewarding correct updating and exiting behaviors respectively. Experiments on various long-context reasoning tasks demonstrate the effectiveness and efficiency of GRU-Mem, which generally outperforms the vanilla MemAgent while achieving up to a 4× inference speed-up.
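The gated recurrent loop described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the LLM that emits gate decisions and memory updates is replaced here by a hypothetical `toy_decide` stand-in that treats chunks containing the marker "EVIDENCE" as worth memorizing.

```python
# Sketch of the GRU-Mem recurrent loop: the memory is rewritten only when
# the update gate opens, and the loop exits as soon as the exit gate opens.
def gru_mem_loop(chunks, decide):
    """Process context chunk-by-chunk with text-controlled gates.

    `decide(memory, chunk)` returns (update_gate, exit_gate, candidate_memory);
    in the real method this decision comes from the LLM itself.
    """
    memory = ""
    steps = 0
    for chunk in chunks:
        steps += 1
        update_gate, exit_gate, candidate = decide(memory, chunk)
        if update_gate:   # memorize only evidence-bearing chunks
            memory = candidate
        if exit_gate:     # stop once the collected evidence suffices
            break
    return memory, steps

# Hypothetical stand-in for the model: update on chunks that contain
# "EVIDENCE", and exit once two pieces of evidence have been stored.
def toy_decide(memory, chunk):
    has_evidence = "EVIDENCE" in chunk
    candidate = (memory + " " + chunk).strip() if has_evidence else memory
    enough = candidate.count("EVIDENCE") >= 2
    return has_evidence, enough, candidate

chunks = ["filler", "EVIDENCE: A", "filler", "EVIDENCE: B", "filler"]
memory, steps = gru_mem_loop(chunks, toy_decide)
print(memory)  # EVIDENCE: A EVIDENCE: B
print(steps)   # 4 -- the loop exits without reading the final chunk
```

Note how the exit gate saves the fifth chunk from being processed at all; with many evidence-free chunks, this early exit is the source of the reported inference-speed gains.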
Source: arXiv:2602.10560