arXiv submission date: 2026-03-16
📄 Abstract - Understanding Reasoning in LLMs through Strategic Information Allocation under Uncertainty

LLMs often exhibit Aha moments during reasoning, such as apparent self-correction following tokens like "Wait," yet their underlying mechanisms remain unclear. We introduce an information-theoretic framework that decomposes reasoning into procedural information and epistemic verbalization - the explicit externalization of uncertainty that supports downstream control actions. We show that purely procedural reasoning can become informationally stagnant, whereas epistemic verbalization enables continued information acquisition and is critical for achieving information sufficiency. Empirical results demonstrate that strong reasoning performance is driven by uncertainty externalization rather than specific surface tokens. Our framework unifies prior findings on Aha moments and post-training experiments, and offers insights for future reasoning model design.
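
The abstract's core contrast, informationally stagnant procedural reasoning versus continued information acquisition under epistemic verbalization, can be illustrated numerically. The Python sketch below is not the paper's formalism: the answer-belief snapshots, the entropy-reduction measure, and the names `entropy_bits` and `info_gained` are hypothetical stand-ins for whatever information measure the authors actually define.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

# Hypothetical snapshots of the model's belief over 4 candidate answers,
# taken after each reasoning step (rows). Numbers are made up for illustration.
procedural_trace = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.42, 0.30, 0.18, 0.10],  # belief barely moves: informationally stagnant
    [0.41, 0.31, 0.18, 0.10],
])
verbalizing_trace = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.60, 0.25, 0.10, 0.05],  # uncertainty externalized, evidence re-examined
    [0.85, 0.10, 0.03, 0.02],  # belief keeps sharpening toward one answer
])

def info_gained(trace):
    """Cumulative entropy reduction (bits) of the answer belief along a trace."""
    ents = [entropy_bits(row) for row in trace]
    return [round(ents[0] - e, 3) for e in ents]

print("procedural :", info_gained(procedural_trace))
print("verbalizing:", info_gained(verbalizing_trace))
```

Under these made-up numbers, the procedural trace's cumulative gain plateaus near zero while the verbalizing trace keeps accumulating bits, mirroring the abstract's claim that epistemic verbalization is what enables information sufficiency.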

Top-level tags: llm theory model evaluation
Detailed tags: reasoning uncertainty information theory epistemic verbalization aha moments

Understanding Reasoning in LLMs through Strategic Information Allocation under Uncertainty


1️⃣ One-sentence summary

This paper proposes an information-theoretic framework arguing that, during reasoning, large language models sustain information acquisition and improve reasoning performance by explicitly expressing their internal uncertainty ("epistemic verbalization") rather than by relying on specific surface tokens, which explains the models' seemingly "Aha"-like self-correction behavior.

Source: arXiv:2603.15500