Beyond Static Cropping: Layer-Adaptive Visual Localization and Decoding Enhancement for Complex Reasoning Tasks
1️⃣ One-Sentence Summary
This paper finds that large vision-language models rely on different network layers for visual grounding depending on the task, and builds on this observation to propose a training-free method that adaptively selects the layers carrying key visual information, improving accuracy on complex visual question answering.
Large Vision-Language Models (LVLMs) have advanced rapidly by aligning visual patches with the text embedding space, but a fixed visual-token budget forces images to be resized to a uniform pretraining resolution, often erasing fine-grained details and causing hallucinations via over-reliance on language priors. Recent attention-guided enhancement (e.g., cropping or region-focused attention allocation) alleviates this, yet it commonly hinges on a static "magic layer" empirically chosen on simple recognition benchmarks and thus may not transfer to complex reasoning tasks. In contrast to this static assumption, we propose a dynamic perspective on visual grounding. Through a layer-wise sensitivity analysis, we demonstrate that visual grounding is a dynamic process: while simple object recognition tasks rely on middle layers, complex visual search and reasoning tasks require visual information to be reactivated at deeper layers. Based on this observation, we introduce Visual Activation by Query (VAQ), a metric that identifies the layer whose attention map is most relevant to query-specific visual grounding by measuring attention sensitivity to the input query. Building on VAQ, we further propose LASER (Layer-adaptive Attention-guided Selective visual and decoding Enhancement for Reasoning), a training-free inference procedure that adaptively selects task-appropriate layers for visual localization and question answering. Experiments across diverse VQA benchmarks show that LASER significantly improves VQA accuracy across tasks with varying levels of complexity.
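The core of VAQ, as described above, is scoring each layer's attention map by its sensitivity to the input query and picking the most query-sensitive layer. A minimal sketch of that idea follows; the function names, the use of an L1 distance between attention distributions, and the with/without-query contrast are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def vaq_scores(attn_with_query, attn_without_query):
    """Score each layer by how much its attention over visual tokens
    shifts when the query is present. This L1-shift proxy is an
    assumption standing in for the paper's 'attention sensitivity
    to the input query' metric."""
    scores = []
    for a_q, a_0 in zip(attn_with_query, attn_without_query):
        # Normalize each attention map into a distribution over
        # visual tokens before comparing.
        p = np.asarray(a_q, dtype=float)
        q = np.asarray(a_0, dtype=float)
        p = p / p.sum()
        q = q / q.sum()
        # Larger distribution shift = stronger query-specific
        # visual grounding at this layer.
        scores.append(np.abs(p - q).sum())
    return np.array(scores)

def select_layer(attn_with_query, attn_without_query):
    """Pick the layer whose visual attention is most query-sensitive;
    LASER would then use this layer for visual localization."""
    return int(np.argmax(vaq_scores(attn_with_query, attn_without_query)))

# Toy usage: 4 layers, 16 visual tokens each; only layer 2's
# attention redistributes when the query is added.
base = [np.ones(16) for _ in range(4)]
with_query = [np.ones(16) for _ in range(4)]
with_query[2] = np.concatenate([np.full(8, 3.0), np.full(8, 0.1)])
print(select_layer(with_query, base))  # layer 2 is most query-sensitive
```

The layer chosen this way can then drive attention-guided enhancement (e.g., deciding which region to crop or re-weight) instead of a fixed "magic layer".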
Source: arXiv:2602.04304