arXiv submission date: 2026-04-13
📄 Abstract - HTDC: Hesitation-Triggered Differential Calibration for Mitigating Hallucination in Large Vision-Language Models

Large vision-language models (LVLMs) achieve strong multimodal performance, but still suffer from hallucinations caused by unstable visual grounding and over-reliance on language priors. Existing training-free decoding methods typically apply calibration at every decoding step, introducing unnecessary computation and potentially disrupting stable predictions. We address this problem by identifying layer-wise hesitation, a simple signal of grounding instability reflected by fluctuations in token preference across intermediate layers. Based on this observation, we propose Hesitation-Triggered Differential Calibration (HTDC), a training-free decoding framework that preserves standard full-branch inference and activates calibration only at hesitation-prone steps. When triggered, HTDC contrasts the full branch with two lightweight probes, a visual-nullification probe and a semantic-nullification probe, to suppress hallucination-prone candidates while avoiding unnecessary intervention on stable steps. Experiments on representative hallucination benchmarks show that HTDC consistently reduces hallucinations while maintaining strong task accuracy, achieving a favorable trade-off between effectiveness and computational overhead.
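The trigger-then-calibrate idea in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact formulation: the hesitation score (fraction of adjacent intermediate layers whose top-1 token flips), the threshold `tau`, the weights `alpha`/`beta`, and the contrastive combination of the full branch with the two nullification probes are all assumed names and forms.

```python
def hesitation_score(layer_logits):
    """Fraction of adjacent intermediate layers whose top-1 token flips.

    layer_logits: list of per-layer logit lists for the current decoding
    step (a hypothetical probe of the model's intermediate layers).
    """
    tops = [max(range(len(l)), key=l.__getitem__) for l in layer_logits]
    flips = sum(a != b for a, b in zip(tops, tops[1:]))
    return flips / (len(tops) - 1)

def htdc_step(layer_logits, visual_null, semantic_null,
              tau=0.3, alpha=0.5, beta=0.5):
    """One decoding step of a hesitation-triggered calibration sketch.

    Calibration runs only when the hesitation score exceeds tau;
    otherwise the full-branch (final-layer) logits pass through
    untouched, avoiding intervention on stable steps.
    tau/alpha/beta are illustrative hyperparameters.
    """
    full = layer_logits[-1]  # full-branch logits = final layer
    if hesitation_score(layer_logits) <= tau:
        return list(full)  # stable step: no intervention
    # Contrast the full branch against the visual-nullification and
    # semantic-nullification probes to down-weight hallucination-prone
    # candidates (assumed contrastive form).
    return [(1 + alpha + beta) * f - alpha * v - beta * s
            for f, v, s in zip(full, visual_null, semantic_null)]
```

On a stable step (identical top-1 token across layers) the function returns the full-branch logits unchanged; on a hesitant step the probe-supported candidate is suppressed and the argmax can shift.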

Top-level tags: llm multi-modal model evaluation
Detailed tags: hallucination mitigation vision-language models decoding calibration training-free method hesitation detection

HTDC: Hesitation-Triggered Differential Calibration for Mitigating Hallucination in Large Vision-Language Models


1️⃣ One-sentence summary

This paper proposes a new method called HTDC, which detects "hesitation" signals while the model generates its answer and applies lightweight calibration only at the steps where hallucination is likely, effectively reducing the tendency of large vision-language models to fabricate content while preserving computational efficiency and answer accuracy.

Source: arXiv 2604.12115