Mitigating Object Hallucinations in LVLMs via Attention Imbalance Rectification
1️⃣ One-Sentence Summary
This paper finds that the root cause of "object hallucination" in large vision-language models (i.e., describing objects that do not appear in the image) is imbalanced attention allocation within the model. Based on this insight, it proposes a lightweight decoding-time intervention that reallocates attention weights to effectively reduce hallucinations, while also improving the model's overall performance on other vision-language tasks.
Object hallucination in Large Vision-Language Models (LVLMs) severely compromises their reliability in real-world applications, posing a critical barrier to their deployment in high-stakes scenarios such as autonomous driving and medical image analysis. Through systematic empirical investigation, we identify that imbalanced attention allocation, both across modalities (i.e., vision and language) and within modalities (among individual tokens), correlates strongly with the occurrence of object hallucination. Leveraging this insight, we introduce a novel concept termed attention imbalance, which not only quantifies the degree of attention disparity but also visually delineates the underlying patterns (e.g., over-attentiveness to irrelevant language tokens or under-attentiveness to discriminative visual features) that drive object hallucination. To mitigate object hallucination, we further propose Attention Imbalance Rectification (AIR), a lightweight decoding-time intervention that reallocates attention weights and adjusts attention distributions to rectify modality-wise and token-wise imbalances. Extensive evaluations on four mainstream LVLMs and three benchmarks (CHAIR, POPE, and MM-Vet) against seven baselines demonstrate that AIR consistently reduces object hallucination rates, achieving up to a 35.1% reduction relative to the baselines, while improving LVLMs' general capability across diverse vision-language tasks by up to 15.9%.
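The abstract describes AIR as a decoding-time intervention that rectifies both modality-wise imbalance (vision vs. language tokens) and token-wise imbalance (among individual tokens). The paper does not give the exact formulation here, so the following is only an illustrative sketch: `rectify_attention`, `vision_share`, and `smooth` are hypothetical names, assuming the rectification amounts to (1) smoothing weights toward uniform within each modality and (2) rescaling each modality to a target share of total attention.

```python
def rectify_attention(attn, vision_mask, vision_share=0.5, smooth=0.2):
    """Hypothetical sketch of decoding-time attention rebalancing (not the
    paper's exact AIR formulation).

    attn         : attention weights of one head for one query (sum to 1)
    vision_mask  : booleans, True where the key token is a vision token
    vision_share : target fraction of attention on vision tokens
                   (modality-wise rectification)
    smooth       : interpolation toward uniform within each modality
                   (token-wise rectification)
    """
    out = [0.0] * len(attn)
    for is_vision, share in ((True, vision_share), (False, 1.0 - vision_share)):
        idx = [i for i, v in enumerate(vision_mask) if bool(v) == is_vision]
        if not idx:
            continue
        part = [attn[i] for i in idx]
        uniform = sum(part) / len(part)
        # token-wise: pull extreme weights toward the modality's uniform value
        part = [(1 - smooth) * p + smooth * uniform for p in part]
        # modality-wise: rescale so this modality receives its target share
        scale = share / sum(part)
        for i, p in zip(idx, part):
            out[i] = p * scale
    return out

# toy example: three under-attended vision tokens, two dominant text tokens
attn = [0.05, 0.05, 0.10, 0.50, 0.30]
mask = [True, True, True, False, False]
rebalanced = rectify_attention(attn, mask, vision_share=0.5)
```

After rectification, the weights still sum to 1, but exactly half of the attention mass now sits on the vision tokens, with the within-modality ranking preserved.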
Source: arXiv:2603.24058