Listen to the Layers: Mitigating Hallucinations with Inter-Layer Disagreement
1️⃣ One-Sentence Summary
This paper proposes a training-free inference-time method that monitors internal signals in the middle layers of a large language model to judge whether generated content is stable, effectively reducing the model's tendency to confidently generate plausible-sounding nonsense (hallucinations) in tasks such as question answering and summarization.
Pretrained Large Language Models (LLMs) are prone to generating fluent yet factually incorrect text, a phenomenon known as hallucination that undermines their reliability and utility in downstream tasks. We hypothesize that a generated text span's factuality is correlated with its representational instability across the model's internal layers. Based on this, we propose the CoCoA (Confusion and Consistency Aware) decoder, a novel, training-free decoding algorithm that mitigates hallucinations at inference time by listening to these signals in the middle layers. We propose two metrics to quantify this instability and use them to penalize outputs that exhibit high internal confusion, thereby steering the model toward more internally consistent and factually grounded outputs. We further propose a self-information gated variant, CoCoA-SIG, that dynamically modulates this penalty to selectively target high-surprise, unstable generations. Extensive experiments on diverse tasks, including question answering, summarization, and code generation, demonstrate that CoCoA significantly improves factual correctness across multiple model families (e.g., Llama-3, Qwen-2.5, Mistral). By leveraging model-intrinsic signals, CoCoA offers an effective and broadly applicable method for enhancing the trustworthiness of LLMs at inference time, without requiring any model retraining.
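The core idea, penalizing next-token candidates whose probability is unstable across the model's middle layers, can be sketched in a few lines. This is a minimal numpy illustration, not the paper's actual metrics: it assumes early-exit logits are available for each layer (as in early-exit decoding setups) and uses the per-token standard deviation of log-probabilities across layers as a stand-in instability measure; the function name `cocoa_scores`, the penalty weight `lam`, and the gate threshold `sig_threshold` are all illustrative.

```python
import numpy as np

def log_softmax(x, axis=-1):
    """Numerically stable log-softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def cocoa_scores(layer_logits, lam=1.0, sig_threshold=None):
    """Penalize next-token candidates that are unstable across layers.

    layer_logits: array of shape (num_layers, vocab) holding early-exit
                  logits for the next token, final layer last.
    Returns penalized next-token scores of shape (vocab,).
    Note: the std-of-log-prob instability proxy and the gating rule here
    are illustrative assumptions, not the paper's exact formulation.
    """
    logp = log_softmax(layer_logits, axis=-1)   # per-layer log-distributions
    final = logp[-1]                            # final-layer log-probs
    # Instability proxy: how much each token's log-prob varies across layers.
    instability = logp.std(axis=0)
    penalty = lam * instability
    if sig_threshold is not None:
        # CoCoA-SIG-style gate: only penalize high-surprise tokens,
        # i.e. those whose self-information -log p exceeds the threshold.
        penalty = penalty * ((-final) > sig_threshold)
    return final - penalty

# Toy example: token 1 wins on the final layer alone, but its probability
# swings wildly across earlier layers, so the penalized score prefers the
# stable token 0 instead.
layer_logits = np.array([
    [2.0, -3.0, 0.0],   # early layer: token 1 unlikely
    [2.0,  3.0, 0.0],   # mid layer:   token 1 dominant
    [2.0, -2.0, 0.0],   # mid layer:   token 1 unlikely again
    [2.0,  2.5, 0.0],   # final layer: token 1 narrowly on top
])
plain_choice = int(np.argmax(layer_logits[-1]))      # greedy on final layer
cocoa_choice = int(np.argmax(cocoa_scores(layer_logits, lam=1.0)))
```

With the self-information gate enabled (e.g. `sig_threshold=1.0`), the low-surprise token 1 escapes the penalty and is selected again, which is the intended selectivity of the gated variant: only surprising, unstable generations are steered away from.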
Source: arXiv:2602.09486