Thinking in Latents: Adaptive Anchor Refinement for Implicit Reasoning in LLMs
1️⃣ One-sentence summary
This paper proposes a new method called AdaAnchor that lets a large language model "think" silently and iteratively in its internal representations, dynamically adjusting the number of thinking steps to solve mathematical word problems efficiently. It maintains or even improves accuracy while substantially reducing output length and compute cost.
Token-level Chain-of-Thought (CoT) prompting has become a standard way to elicit multi-step reasoning in large language models (LLMs), especially for mathematical word problems. However, generating long intermediate traces increases output length and inference cost, and can be inefficient when the model could arrive at the correct answer without extensive verbalization. This has motivated latent-space reasoning approaches that shift computation into hidden representations and only emit a final answer. Yet, many latent reasoning methods depend on a fixed number of latent refinement steps at inference, adding another hyperparameter that must be tuned across models and datasets to balance accuracy and efficiency. We introduce AdaAnchor, a latent reasoning framework that performs silent iterative computation by refining a set of latent anchor vectors attached to the input. AdaAnchor further incorporates an adaptive halting mechanism that monitors anchor stability across iterations and terminates refinement once the anchor dynamics converge, allocating fewer steps to easier instances while reserving additional refinement steps for harder ones under a shared maximum-step budget. Our empirical evaluation across three mathematical word-problem benchmarks shows that AdaAnchor with adaptive halting yields accuracy gains of up to 5% over fixed-step latent refinement while reducing average latent refinement steps by 48-60% under the same maximum-step budget. Compared to standard reasoning baselines, AdaAnchor achieves large reductions in generated tokens (92-93%) by moving computation into silent latent refinement, offering a different accuracy-efficiency trade-off with substantially lower output-token usage.
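The abstract describes the mechanism but gives no pseudocode. Below is a minimal sketch of the adaptive halting loop, assuming the halting signal is the relative change of the anchor vectors between iterations; the names (`refine_step`, `tol`, `max_steps`) are hypothetical, and the paper's actual stability criterion may differ.

```python
import torch

def refine_anchors(anchors, refine_step, max_steps=16, tol=1e-3):
    """Iteratively refine latent anchor vectors, halting once they stabilize.

    anchors:     (num_anchors, d) tensor of latent anchor vectors
    refine_step: callable mapping anchors -> updated anchors (one silent
                 latent step; in the paper this would be a model forward pass)
    max_steps:   shared maximum-step budget
    tol:         relative-change threshold (assumed halting criterion)
    """
    steps_used = 0
    for steps_used in range(1, max_steps + 1):
        new_anchors = refine_step(anchors)
        # Halting signal: relative change of the anchor set across iterations.
        delta = (new_anchors - anchors).norm() / (anchors.norm() + 1e-8)
        anchors = new_anchors
        if delta < tol:  # anchor dynamics have converged; stop early
            break
    return anchors, steps_used


# Toy usage: a contraction toward a fixed point stands in for the model's
# latent refinement step. Easy instances (fast-converging dynamics) halt
# early; harder ones consume more of the shared budget.
target = torch.randn(8, 64)
anchors, steps = refine_anchors(torch.zeros(8, 64),
                                lambda a: a + 0.5 * (target - a))
print(f"halted after {steps} latent refinement steps")
```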
Source: arXiv: 2603.15051