arXiv submission date: 2026-04-20
📄 Abstract - Source-Free Domain Adaptation with Vision-Language Prior

Source-Free Domain Adaptation (SFDA) seeks to adapt a source model, pre-trained on a supervised source domain, to a target domain with access only to unlabeled target training data. Conventional methods rely on pseudo labeling and/or auxiliary supervision and are therefore inevitably error-prone. To mitigate this limitation, in this work we explore, for the first time, the potential of off-the-shelf vision-language (ViL) multimodal models (e.g., CLIP) with rich yet heterogeneous knowledge. We find that directly applying the ViL model to the target domain in a zero-shot fashion is unsatisfactory, as it is largely generic rather than specialized for the task at hand. To make it task-specific, we propose a novel DIFO++ approach. Specifically, DIFO++ alternates between two steps during adaptation: (i) customizing the ViL model by maximizing its mutual information with the target model in a prompt-learning manner, and (ii) distilling the knowledge of this customized ViL model to the target model, centering on gap-region reduction. During progressive knowledge adaptation, we first identify and focus on the gap region, where enclosed features are entangled and class-ambiguous, as it often captures richer task-specific semantics. Reliable pseudo-labels are then generated by fusing predictions from the target and ViL models, supported by a memory mechanism. Finally, gap-region reduction is guided by category attention and predictive consistency for semantic alignment, complemented by referenced entropy minimization to suppress uncertainty. Extensive experiments show that DIFO++ significantly outperforms state-of-the-art alternatives. Our code and data are available at this https URL.
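The pseudo-label generation described in the abstract, fusing the target and ViL model predictions with support from a memory mechanism, can be sketched roughly as below. The function names, the simple probability-averaging fusion, and the momentum-based memory update are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import math

def softmax(logits):
    """Convert a list of logits into class probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_predictions(p_target, p_vil, memory, momentum=0.9):
    """Fuse target-model and ViL-model class probabilities into a pseudo-label.

    `memory` is a per-sample running estimate of the fused distribution
    (None on the first pass); the momentum update stands in for the
    paper's memory mechanism (assumed form, not the authors' exact one).
    """
    # Step 1: fuse the two models' predictions (here: a simple average).
    fused = [(a + b) / 2.0 for a, b in zip(p_target, p_vil)]
    # Step 2: smooth with the memory via an exponential moving average.
    if memory is None:
        memory = fused
    else:
        memory = [momentum * m + (1.0 - momentum) * f
                  for m, f in zip(memory, fused)]
    # Renormalize so the smoothed estimate remains a distribution.
    z = sum(memory)
    smoothed = [m / z for m in memory]
    # Step 3: the pseudo-label is the most probable class under the fusion.
    pseudo_label = max(range(len(smoothed)), key=smoothed.__getitem__)
    return pseudo_label, smoothed

# Example: both models lean toward class 0, so the fused pseudo-label is 0.
label, mem = fuse_predictions(softmax([2.0, 0.5, 0.1]),
                              softmax([1.5, 1.0, 0.2]),
                              memory=None)
```

In the actual method this fusion would run per target sample inside the alternating adaptation loop, with the customized ViL model's predictions refreshed after each prompt-learning step.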

Top-level tags: computer vision, model training, multi-modal
Detailed tags: domain adaptation, vision-language models, source-free, clip, prompt learning

Source-Free Domain Adaptation with Vision-Language Prior


1️⃣ One-sentence summary

This paper proposes a new method named DIFO++ that leverages the knowledge of an off-the-shelf, general-purpose vision-language model (such as CLIP): by alternating between two steps, model customization and knowledge distillation, it helps an already-trained model adapt to a new target domain containing only unlabeled data, without any access to the source data, significantly improving adaptation performance.

From arXiv: 2604.17748