Semi-Supervised Few-Shot Adaptation of Vision-Language Models
1️⃣ One-sentence summary
This paper proposes a semi-supervised method for adapting vision-language models that uses unlabeled data to complement scarce annotations. In tasks with high labeling costs such as medical image classification, it effectively improves performance under class imbalance with very few labeled samples, cutting labeling effort by more than 50%.
Vision-language models (VLMs) pre-trained on large, heterogeneous data sources are becoming increasingly popular, providing rich multi-modal embeddings that enable efficient transfer to new tasks. A particularly relevant application is few-shot adaptation, where only a handful of annotated examples are available to adapt the model through multi-modal linear probes. In medical imaging, specialized VLMs have shown promising performance in zero- and few-shot image classification, which is valuable for mitigating the high cost of expert annotations. However, challenges remain in extremely low-shot regimes: the inherent class imbalances in medical tasks often lead to underrepresented categories, penalizing overall model performance. To address this limitation, we propose leveraging unlabeled data by introducing an efficient semi-supervised solver that propagates text-informed pseudo-labels during few-shot adaptation. The proposed method enables lower-budget annotation pipelines for adapting VLMs, reducing labeling effort by >50% in low-shot regimes.
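The core mechanism described above (text-informed pseudo-labels on an unlabeled pool, combined with a few labeled shots to train a multi-modal linear probe) can be sketched as follows. This is a minimal illustration assuming CLIP-like image and class-text embeddings; the function names, the softmax-temperature pseudo-labeling, and the fixed-weight loss on the unlabeled pool are assumptions for illustration, not the paper's exact solver.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def text_informed_pseudo_labels(img_feats, text_feats, tau=0.07):
    """Soft pseudo-labels from cosine similarity between image
    embeddings and class-text embeddings (zero-shot logits)."""
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    return softmax(img @ txt.T / tau)

def fit_semi_supervised_probe(X_l, y_l, X_u, text_feats, n_classes,
                              lam=0.5, lr=0.5, epochs=300):
    """Train a linear probe on one-hot labeled targets plus
    text-informed soft pseudo-labels on the unlabeled pool,
    down-weighting the unlabeled term by `lam` (an assumed scheme)."""
    Y_l = np.eye(n_classes)[y_l]
    Y_u = text_informed_pseudo_labels(X_u, text_feats)
    X = np.vstack([X_l, X_u])
    Y = np.vstack([Y_l, Y_u])
    w = np.concatenate([np.ones(len(X_l)), lam * np.ones(len(X_u))])
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        # weighted cross-entropy gradient for a softmax linear classifier
        P = softmax(X @ W + b)
        G = (P - Y) * w[:, None] / w.sum()
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b
```

In this sketch the unlabeled pool never needs ground-truth labels: the text encoder supplies soft targets, which is what enables the lower-budget annotation pipeline the abstract describes.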
Source: arXiv:2603.02959