CAPT: Confusion-Aware Prompt Tuning for Reducing Vision-Language Misalignment
1️⃣ One-sentence summary
This paper proposes CAPT, a Confusion-Aware Prompt Tuning framework that lets vision-language models learn from their own systematic misclassifications among similar categories, significantly reducing confusion while improving discriminability and generalization.
Vision-language models like CLIP have achieved remarkable progress in cross-modal representation learning, yet suffer from systematic misclassifications among visually and semantically similar categories. We observe that such confusion patterns are not random but persistently occur between specific category pairs, revealing the model's intrinsic bias and limited fine-grained discriminative ability. To address this, we propose CAPT, a Confusion-Aware Prompt Tuning framework that enables models to learn from their own misalignment. Specifically, we construct a Confusion Bank to explicitly model stable confusion relationships across categories and misclassified samples. On this basis, we introduce a Semantic Confusion Miner (SEM) to capture global inter-class confusion through semantic difference and commonality prompts, and a Sample Confusion Miner (SAM) to retrieve representative misclassified instances from the bank and capture sample-level cues through a Diff-Manner Adapter that integrates global and local contexts. To further unify confusion information across different granularities, a Multi-Granularity Difference Expert (MGDE) module is designed to jointly leverage semantic- and sample-level experts for more robust confusion-aware reasoning. Extensive experiments on 11 benchmark datasets demonstrate that our method significantly reduces confusion-induced errors while enhancing the discriminability and generalization of both base and novel classes, successfully resolving 50.72 percent of confusable sample pairs. Code will be released at this https URL.
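The abstract's core idea of a Confusion Bank, a store of stable confusion relationships between category pairs along with representative misclassified samples, can be illustrated with a minimal sketch. Everything below (class name, method names, the sample cap) is an illustrative assumption, not the paper's actual implementation:

```python
from collections import Counter, defaultdict

class ConfusionBank:
    """Hypothetical sketch: track persistent misclassification pairs and
    keep a few representative misclassified samples per pair. Names and
    thresholds are assumptions, not the paper's implementation."""

    def __init__(self, max_samples_per_pair=8):
        self.pair_counts = Counter()        # (true, pred) -> error count
        self.samples = defaultdict(list)    # (true, pred) -> sample ids
        self.max_samples = max_samples_per_pair

    def update(self, sample_id, true_label, pred_label):
        # Only misclassifications contribute confusion evidence.
        if true_label == pred_label:
            return
        pair = (true_label, pred_label)
        self.pair_counts[pair] += 1
        if len(self.samples[pair]) < self.max_samples:
            self.samples[pair].append(sample_id)

    def top_confusions(self, k=5):
        # The most frequent pairs are the "stable" confusions the
        # semantic- and sample-level miners would attend to.
        return self.pair_counts.most_common(k)

bank = ConfusionBank()
predictions = [("img1", "wolf", "husky"), ("img2", "wolf", "husky"),
               ("img3", "wolf", "wolf"), ("img4", "cat", "lynx")]
for sid, y_true, y_pred in predictions:
    bank.update(sid, y_true, y_pred)
```

In this toy run, (`wolf`, `husky`) surfaces as the dominant confusion pair, and the stored sample ids would play the role of the misclassified instances retrieved by the paper's Sample Confusion Miner.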
Source: arXiv:2603.02557