MultiModal Fine-tuning with Synthetic Captions
1️⃣ One-sentence summary
This paper proposes a new method that uses multimodal large language models to generate high-quality synthetic captions for images, turning image-only unimodal datasets into multimodal image-text datasets. This lets fine-tuning fully exploit the multimodal knowledge learned during pre-training and significantly improves image classification performance, especially in few-shot learning scenarios.
In this paper, we address a fundamental gap between pre-training and fine-tuning of deep neural networks: while pre-training has shifted from unimodal to multimodal learning with enhanced visual understanding, fine-tuning predominantly remains unimodal, limiting the benefits of rich pre-trained representations. To bridge this gap, we propose a novel approach that transforms unimodal datasets into multimodal ones using Multimodal Large Language Models (MLLMs) to generate synthetic image captions for fine-tuning models with a multimodal objective. Our method employs carefully designed prompts incorporating class labels and domain context to produce high-quality captions tailored for classification tasks. Furthermore, we introduce a supervised contrastive loss function that explicitly encourages clustering of same-class representations during fine-tuning, along with a new inference technique that leverages class-averaged text embeddings from multiple synthetic captions per image. Extensive experiments across 13 image classification benchmarks demonstrate that our approach outperforms baseline methods, with particularly significant improvements in few-shot learning scenarios. Our work establishes a new paradigm for dataset enhancement that effectively bridges the gap between multimodal pre-training and fine-tuning. Our code is available at this https URL.
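To make the two techniques named in the abstract more concrete, below is a minimal PyTorch-style sketch of (a) a supervised contrastive loss that pulls same-class representations together during fine-tuning, and (b) inference against class-averaged text embeddings built from multiple synthetic captions per class. The function names, tensor shapes, and temperature value are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Sketch of a supervised contrastive loss: same-class embeddings in a
    batch are treated as positives and pulled together, all others pushed
    apart. `features` is (B, D); `labels` is (B,) with integer class ids."""
    features = F.normalize(features, dim=-1)
    sim = features @ features.T / temperature                        # (B, B) similarities
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    sim = sim.masked_fill(~not_self, float("-inf"))                  # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)       # log-softmax per anchor
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0                                         # anchors with >= 1 positive
    mean_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[has_pos] / pos_counts[has_pos]
    return -mean_pos.mean()

def predict_with_class_averaged_captions(image_emb, caption_embs_per_class):
    """Sketch of inference with class-averaged text embeddings: embeddings of
    several synthetic captions per class are averaged into one prototype, and
    the image is assigned to the most cosine-similar prototype."""
    protos = torch.stack([
        F.normalize(F.normalize(embs, dim=-1).mean(dim=0), dim=-1)   # average, then re-normalize
        for embs in caption_embs_per_class                           # element c: (N_c, D)
    ])                                                               # (C, D)
    sims = protos @ F.normalize(image_emb, dim=-1)                   # cosine similarity per class
    return sims.argmax().item()
```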
Source: arXiv:2601.21426