Adaptive Loops and Memory in Transformers: Think Harder or Know More?
1️⃣ One-Sentence Summary
This paper proposes a new Transformer model that combines an adaptive looping mechanism with a memory bank. By letting different parts of the model learn either to "think repeatedly" or to "store and retrieve knowledge", it outperforms a parameter- and FLOP-matched deeper baseline on mathematical reasoning while recovering performance on commonsense tasks.
Chain-of-thought (CoT) prompting enables reasoning in language models but requires explicit verbalization of intermediate steps. Looped transformers offer an alternative by iteratively refining representations within hidden states. This parameter efficiency comes at a cost, as looped models lack the storage capacity of deeper models, which use unique weights per layer. In this work, we investigate transformer models that feature both adaptive per-layer looping, where each transformer block learns to iterate its hidden state via a learned halting mechanism, and gated memory banks that provide additional learned storage. We find that looping primarily benefits mathematical reasoning, while memory banks help recover performance on commonsense tasks compared to parameter- and FLOP-matched models. Combining both mechanisms yields a model that outperforms an iso-FLOP baseline (with three times the number of layers) on math benchmarks. Analysis of model internals reveals layer specialization: early layers learn to loop minimally and access memory sparingly, while later layers do both more heavily.
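For intuition, below is a minimal PyTorch sketch of how one such block might combine the two mechanisms: a learned halting head decides whether to run another loop iteration, and a bank of trainable memory slots is read via cross-attention and mixed in through a sigmoid gate. The class and parameter names (`AdaptiveLoopedBlock`, `mem_slots`, `max_loops`) and all design details are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class AdaptiveLoopedBlock(nn.Module):
    """Illustrative sketch (not the paper's code): one transformer block that
    can iterate on its hidden state, with a learned halting head and a gated
    read from a trainable memory bank."""

    def __init__(self, d_model: int, n_heads: int, mem_slots: int, max_loops: int = 4):
        super().__init__()
        self.max_loops = max_loops
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Memory bank: a fixed set of trainable slots, shared across inputs.
        self.memory = nn.Parameter(torch.randn(mem_slots, d_model) * 0.02)
        self.mem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mem_gate = nn.Linear(d_model, d_model)
        # Halting head: per-token probability of stopping after this iteration.
        self.halt = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq, _ = x.shape
        halted = torch.zeros(batch, seq, 1, device=x.device)  # cumulative halt prob.
        state = x
        for _ in range(self.max_loops):
            # Standard pre-norm self-attention + MLP update of the hidden state.
            h = self.norm1(state)
            attn_out, _ = self.attn(h, h, h, need_weights=False)
            state = state + attn_out
            state = state + self.mlp(self.norm2(state))

            # Gated read from the memory bank via cross-attention.
            mem = self.memory.unsqueeze(0).expand(batch, -1, -1)
            mem_out, _ = self.mem_attn(state, mem, mem, need_weights=False)
            gate = torch.sigmoid(self.mem_gate(state))
            state = state + gate * mem_out

            # Accumulate halting probability and stop looping early once
            # essentially every token has halted.
            p = torch.sigmoid(self.halt(state))
            halted = halted + (1 - halted) * p
            if (halted > 0.99).all():
                break
        return state


if __name__ == "__main__":
    block = AdaptiveLoopedBlock(d_model=64, n_heads=4, mem_slots=16)
    y = block(torch.randn(2, 10, 64))  # [batch, seq, d_model]
    print(y.shape)
```

In this sketch the loop count is bounded by `max_loops` and the halting signal is only used for early exit; an ACT-style weighted combination of per-iteration states would be a natural refinement, and the paper's learned halting mechanism may differ from either choice.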
Source: arXiv:2603.08391