arXiv submission date: 2026-04-01
📄 Abstract - Wired for Overconfidence: A Mechanistic Perspective on Inflated Verbalized Confidence in LLMs

Large language models are often not just wrong, but *confidently wrong*: when they produce factually incorrect answers, they tend to verbalize overly high confidence rather than signal uncertainty. Such verbalized overconfidence can mislead users and weaken confidence scores as a reliable uncertainty signal, yet its internal mechanisms remain poorly understood. We present a circuit-level mechanistic analysis of this inflated verbalized confidence in LLMs, organized around three axes: capturing verbalized confidence as a differentiable internal signal, identifying the circuits that causally inflate it, and leveraging these insights for targeted inference-time recalibration. Across two instruction-tuned LLMs on three datasets, we find that a compact set of MLP blocks and attention heads, concentrated in middle-to-late layers, consistently writes the confidence-inflation signal at the final token position. We further show that targeted inference-time interventions on these circuits substantially improve calibration. Together, our results suggest that verbalized overconfidence in LLMs is driven by identifiable internal circuits and can be mitigated through targeted intervention.
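The abstract does not specify the exact form of the intervention, but a common way to realize "targeted inference-time intervention on a signal written to the residual stream" is directional ablation: at the final token position, subtract the component of the implicated layer's output that lies along a learned confidence-inflation direction. The sketch below is a hypothetical illustration of that operation on a raw activation vector; the function name, the direction vector, and the scaling factor are all assumptions, not the paper's actual code.

```python
def project_out(hidden, direction, alpha=1.0):
    """Remove alpha times the component of `hidden` along `direction`.

    `hidden`    -- activation vector at the final token position (toy, 1-D list)
    `direction` -- assumed "confidence-inflation" direction in the same space
    `alpha`     -- intervention strength (1.0 = full ablation of the component)
    """
    norm = sum(d * d for d in direction) ** 0.5
    unit = [d / norm for d in direction]
    # Scalar projection of the activation onto the inflation direction.
    coeff = sum(h * u for h, u in zip(hidden, unit))
    # Write back the activation with that component (partially) removed.
    return [h - alpha * coeff * u for h, u in zip(hidden, unit)]

# Toy example: a 3-d final-token activation and a hypothetical direction.
h_edited = project_out([2.0, 1.0, 0.0], [1.0, 0.0, 0.0])
print(h_edited)  # -> [0.0, 1.0, 0.0]
```

In a real model this edit would be applied via a forward hook on the identified MLP blocks or attention heads, at the final token only, leaving all other positions untouched.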

Top-level tags: llm model evaluation theory
Detailed tags: mechanistic interpretability confidence calibration circuit analysis uncertainty quantification mlp attention

Wired for Overconfidence: A Mechanistic Perspective on Inflated Verbalized Confidence in LLMs


1️⃣ One-sentence summary

By analyzing the internal workings of large language models, this paper identifies a small set of specific "circuits" as the root cause of models expressing excessive confidence even when their answers are wrong, and demonstrates that targeted intervention on these circuits can substantially improve how reliably a model assesses its own answers.

Source: arXiv:2604.01457