Mitigating Multimodal LLMs Hallucinations via Relevance Propagation at Inference Time
1️⃣ One-Sentence Summary
This paper proposes LIME, a training-free method that dynamically strengthens the model's reliance on perceptual inputs such as vision and audio at inference time, effectively reducing hallucinated content that contradicts the input and thereby improving the accuracy and reliability of multimodal LLM outputs.
Multimodal large language models (MLLMs) have revolutionized the landscape of AI, demonstrating impressive capabilities on complex vision-language and audio-language tasks. A critical challenge remains, however: these models often hallucinate, generating outputs that diverge from the provided perceptual inputs. This tendency stems from an inherent imbalance in modality utilization during inference, where the dominance of textual tokens crowds out the perceptual inputs, so the model frequently falls back on language priors at the expense of grounded evidence. To tackle this issue, we propose Learning Inference-time Modality Enhancement (LIME), a training-free framework designed to strengthen multimodal grounding by explicitly enhancing modality usage during decoding. LIME leverages Layer-wise Relevance Propagation (LRP) to quantify token-level contributions and defines a relevance-based objective that promotes increased reliance on perceptual inputs. This objective is enforced through inference-time updates to the model's key-value representations, without modifying model parameters or requiring additional training data. We evaluate LIME across multiple multimodal benchmarks in both vision and audio domains, demonstrating consistent reductions in hallucinations and improved grounding while preserving generation quality. Further analysis shows that LIME increases the contribution of perceptual modalities and produces more localized, semantically aligned relevance patterns.
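The abstract leaves the exact update rule unspecified, so the following is only a minimal sketch of the mechanism it describes: score how much relevance the perceptual tokens receive, then nudge the cached key-value states to raise that share while leaving model weights untouched. Everything below is an illustrative assumption rather than the paper's implementation: a toy single-head attention layer stands in for one MLLM decoder layer, a gradient-times-input proxy replaces full Layer-wise Relevance Propagation, and `perceptual_relevance`, `lr`, and `steps` are hypothetical names and hyperparameters.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy dimensions: hidden size, number of perceptual (image/audio) tokens,
# and number of text tokens in the cached context.
d, n_percept, n_text = 64, 16, 48
n = n_percept + n_text

# Cached keys and values for one decoder layer; in this sketch they are the
# only tensors updated at inference time, so model weights stay frozen.
kv = torch.randn(2, n, d, requires_grad=True)
q = torch.randn(1, d)  # query of the token currently being decoded


def attend(q, kv):
    """Single-head scaled dot-product attention over the cached context."""
    k, v = kv[0], kv[1]
    attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)  # (1, n)
    return attn @ v, attn


def perceptual_relevance(q, kv):
    """Gradient-times-input proxy for the relevance mass on perceptual tokens.

    The paper propagates relevance layer by layer with LRP; this one-layer
    proxy only illustrates the shape of the objective.
    """
    out, _ = attend(q, kv)
    score = out.norm()  # scalar stand-in for the logit of interest
    grad = torch.autograd.grad(score, kv, create_graph=True)[0]
    rel = (grad * kv).sum(dim=(0, 2))  # per-token relevance, shape (n,)
    return rel[:n_percept].sum() / rel.abs().sum().clamp_min(1e-8)


# Inference-time updates: a few ascent steps on the relevance objective,
# taken with respect to the KV cache alone (hyperparameters illustrative).
lr, steps = 0.05, 5
for _ in range(steps):
    obj = perceptual_relevance(q, kv)
    (grad_kv,) = torch.autograd.grad(obj, kv)
    kv = (kv + lr * grad_kv).detach().requires_grad_(True)

print(f"perceptual relevance share after editing: "
      f"{perceptual_relevance(q, kv).item():.3f}")
```

The design point the sketch preserves is that only the key-value cache is edited at decoding time; no parameters change and no extra data is required, which is what makes the approach training-free.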
Source: arXiv: 2605.01766