arXiv submission date: 2026-03-17
📄 Abstract - Locate-then-Sparsify: Attribution Guided Sparse Strategy for Visual Hallucination Mitigation

Despite the significant advancements in Large Vision-Language Models (LVLMs), their tendency to generate hallucinations undermines reliability and restricts broader practical deployment. Among the hallucination mitigation methods, feature steering emerges as a promising approach that reduces erroneous outputs in LVLMs without increasing inference costs. However, current methods apply uniform feature steering across all layers. This heuristic strategy ignores inter-layer differences, potentially disrupting layers unrelated to hallucinations and ultimately leading to performance degradation on general tasks. In this paper, we propose a plug-and-play framework called Locate-Then-Sparsify for Feature Steering (LTS-FS), which controls the steering intensity according to the hallucination relevance of each layer. We first construct a synthetic dataset comprising token-level and sentence-level hallucination cases. Based on this dataset, we introduce an attribution method based on causal interventions to quantify the hallucination relevance of each layer. With the attribution scores across layers, we propose a layerwise strategy that converts these scores into feature steering intensities for individual layers, enabling more precise adjustments specifically on hallucination-relevant layers. Extensive experiments across multiple LVLMs and benchmarks demonstrate that our LTS-FS framework effectively mitigates hallucination while preserving strong performance.
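The abstract's pipeline (per-layer attribution scores → sparse, layerwise steering intensities → steered hidden states) might be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the function names, the `top_k` sparsification, and the score normalization are all assumptions.

```python
import numpy as np

def scores_to_intensities(attribution, top_k=3, base_alpha=1.0):
    """Convert per-layer hallucination-attribution scores into sparse
    steering intensities: only the top-k most hallucination-relevant
    layers are steered, with strength proportional to their normalized
    score. (top_k and the normalization are illustrative choices.)"""
    attribution = np.asarray(attribution, dtype=float)
    intensities = np.zeros_like(attribution)
    top = np.argsort(attribution)[-top_k:]  # indices of the most relevant layers
    intensities[top] = attribution[top] / attribution[top].sum()
    return base_alpha * intensities

def apply_layerwise_steering(hidden_states, steer_vectors, intensities):
    """Shift each layer's hidden state along its steering vector,
    scaled by that layer's intensity (zero for irrelevant layers)."""
    return [h + a * v for h, a, v in zip(hidden_states, intensities, steer_vectors)]

# Toy example: 8 layers, hidden size 4, made-up attribution scores.
rng = np.random.default_rng(0)
scores = np.array([0.05, 0.10, 0.40, 0.90, 0.70, 0.20, 0.08, 0.03])
alphas = scores_to_intensities(scores, top_k=3)
hidden = [rng.standard_normal(4) for _ in range(8)]
vecs = [rng.standard_normal(4) for _ in range(8)]
steered = apply_layerwise_steering(hidden, vecs, alphas)
print(np.count_nonzero(alphas))  # → 3: only three layers receive nonzero steering
```

The key contrast with uniform feature steering is visible in `alphas`: layers with low attribution scores keep a steering intensity of exactly zero, so their hidden states pass through unchanged, which is how the method avoids disrupting hallucination-irrelevant layers.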

Top-level tags: llm, multi-modal, model evaluation
Detailed tags: visual hallucination, feature steering, causal attribution, layerwise sparsity, vision-language models

Locate-then-Sparsify: Attribution Guided Sparse Strategy for Visual Hallucination Mitigation


1️⃣ One-sentence summary

This paper proposes a plug-and-play framework called LTS-FS that quantifies how strongly each layer of a large vision-language model is associated with hallucination, then adjusts the features of only those relevant layers, effectively reducing hallucinations without hurting the model's ability on general tasks.

Source: arXiv:2603.16284