arXiv submission date: 2026-03-09
📄 Abstract - FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models

CLIP-based prompt tuning enables pretrained Vision-Language Models (VLMs) to adapt efficiently to downstream tasks. Although existing studies have made significant progress, they pay limited attention to changes in the internal attention representations of VLMs during tuning. In this paper, we attribute the failure modes of prompt-tuning predictions to shifts in the foreground attention of the visual encoder, and propose Foreground View-Guided Prompt Tuning (FVG-PT), an adaptive plug-and-play foreground attention guidance module, to alleviate these shifts. Concretely, FVG-PT introduces a learnable Foreground Reliability Gate to automatically enhance foreground view quality, applies a Foreground Distillation Compensation module to guide visual attention toward the foreground, and further introduces a Prior Calibration module to mitigate the generalization degradation caused by excessive focus on the foreground. Experiments on multiple backbone models and datasets show the effectiveness and compatibility of FVG-PT. Code is available at: this https URL
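The gate-and-distill idea in the abstract can be sketched in a few lines. The snippet below is a hypothetical NumPy illustration, not the paper's implementation: `gated_foreground_target` stands in for the Foreground Reliability Gate (a learnable scalar blending a foreground-masked attention view with the raw attention), and `distill_loss` stands in for the Foreground Distillation Compensation term that pulls visual attention toward that foreground target. The function names, the sigmoid gate, and the KL-based loss are all assumptions for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_foreground_target(attn, fg_mask, gate_logit):
    """Hypothetical Foreground Reliability Gate sketch.

    attn: raw visual attention over patches (sums to 1)
    fg_mask: soft/binary foreground mask per patch
    gate_logit: learnable scalar; sigmoid gives the reliability gate g
    """
    g = 1.0 / (1.0 + np.exp(-gate_logit))       # gate value in (0, 1)
    fg_attn = attn * fg_mask                    # keep only foreground mass
    s = fg_attn.sum()
    fg_attn = fg_attn / s if s > 0 else attn    # renormalized foreground view
    return g * fg_attn + (1.0 - g) * attn       # gated blend of the two views

def distill_loss(attn, target, eps=1e-8):
    """KL(target || attn): penalizes attention that ignores the foreground."""
    return float(np.sum(target * (np.log(target + eps) - np.log(attn + eps))))

# Toy example: 4 patches, the first two are foreground.
attn = softmax(np.array([2.0, 0.5, 0.1, 0.1]))
mask = np.array([1.0, 1.0, 0.0, 0.0])
target = gated_foreground_target(attn, mask, gate_logit=2.0)
loss = distill_loss(attn, target)
```

With a high gate logit the target concentrates mass on foreground patches, so minimizing the loss during tuning would push the encoder's attention back toward the foreground; a low gate (unreliable foreground view) leaves the raw attention nearly unchanged.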

Top-level tags: natural language processing · computer vision · model training
Detailed tags: vision-language models · prompt tuning · attention guidance · foreground distillation · clip adaptation

FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models


1️⃣ One-sentence summary

This paper proposes a new method, FVG-PT, that automatically guides the visual model to attend more to the key foreground objects in an image while preventing the performance degradation caused by excessive foreground focus, thereby effectively improving the adaptation ability and prediction accuracy of existing vision-language models on downstream tasks.

Source: arXiv:2603.08708