NAACL: Noise-AwAre Verbal Confidence Calibration for LLMs in RAG Systems
1️⃣ One-Sentence Summary
This paper finds that noisy retrieved information in retrieval-augmented generation (RAG) systems causes large language models to become overconfident. It proposes NAACL, a noise-aware calibration framework that fine-tunes models to recognize noise and accurately assess the confidence of their own answers, substantially improving their trustworthiness.
Accurately assessing model confidence is essential for deploying large language models (LLMs) in mission-critical factual domains. While retrieval-augmented generation (RAG) is widely adopted to improve grounding, confidence calibration in RAG settings remains poorly understood. We conduct a systematic study across four benchmarks, revealing that LLMs exhibit poor calibration performance due to noisy retrieved contexts. Specifically, contradictory or irrelevant evidence tends to inflate the model's false certainty, leading to severe overconfidence. To address this, we propose NAACL Rules (Noise-AwAre Confidence CaLibration Rules) to provide a principled foundation for resolving overconfidence under noise. We further design NAACL, a noise-aware calibration framework that synthesizes supervision from about 2K HotpotQA examples guided by these rules. By performing supervised fine-tuning (SFT) with this data, NAACL equips models with intrinsic noise awareness without relying on stronger teacher models. Empirical results show that NAACL yields substantial gains, improving ECE scores by 10.9% in-domain and 8.0% out-of-domain. By bridging the gap between retrieval noise and verbal calibration, NAACL paves the way for both accurate and epistemically reliable LLMs.
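The abstract reports gains in ECE (Expected Calibration Error), the standard metric for how well a model's stated confidence matches its actual accuracy. The following is a minimal illustrative sketch of ECE (not code from the paper): predictions are grouped into equal-width confidence bins, and the weighted gap between mean confidence and empirical accuracy is accumulated across bins. The example inputs are hypothetical.

```python
# Illustrative ECE sketch (not the paper's implementation).
# Bins predictions by stated confidence and accumulates the weighted gap
# between average confidence and empirical accuracy per bin.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE over equal-width confidence bins spanning (0, 1]."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        # Gap between how confident the model claimed to be and how often
        # it was right, weighted by the fraction of samples in this bin.
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += (mask.sum() / len(confidences)) * gap
    return ece

# Hypothetical overconfident model: high stated confidence, mediocre accuracy.
conf = [0.95, 0.9, 0.92, 0.88, 0.6]
hits = [1, 0, 0, 1, 1]
print(round(expected_calibration_error(conf, hits), 3))
```

A lower ECE means stated confidence tracks accuracy more closely; the paper's claimed 10.9% in-domain improvement refers to reductions in this kind of miscalibration gap.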
Source: arXiv: 2601.11004