📄 Paper Summary
Concept-Aware Batch Sampling Improves Language-Image Pretraining
1️⃣ One-Sentence Summary
This paper proposes CABS, a dynamic batch sampling method that selects training data on the fly according to a target concept distribution, significantly improving vision-language model performance without relying on pre-filtered static datasets.
2️⃣ Abstract
What data should a vision-language model be trained on? To answer this question, many data curation efforts center on the quality of a dataset. However, most of these existing methods are (i) offline, i.e. they produce a static dataset from a set of predetermined filtering criteria, and (ii) concept-agnostic, i.e. they use model-based filters which induce additional data biases. In this work, we go beyond such offline, concept-agnostic methods and advocate for more flexible, task-adaptive online concept-based curation. Our first contribution is DataConcept, a collection of 128M web-crawled image-text pairs annotated with fine-grained details about their concept composition. Building on DataConcept, we introduce Concept-Aware Batch Sampling (CABS), a simple yet effective batch sampling framework that flexibly constructs batches on-the-fly based on specific target distributions. We propose two variants: (i) Diversity Maximization (CABS-DM) to curate batches with a broad coverage of available concepts, and (ii) Frequency Maximization (CABS-FM) to curate batches with high object multiplicity. Through extensive evaluations across 28 benchmarks, we demonstrate that our CABS method significantly benefits CLIP/SigLIP model classes and yields highly performant models. Overall, CABS represents a strong open-source alternative to proprietary online data curation algorithms, enabling practitioners to define custom concept distributions that optimize for specific downstream tasks.
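The paper's implementation is not reproduced here, but the two sampling variants can be conveyed with a minimal Python sketch. It assumes each DataConcept sample carries a list of concept labels; the sample schema, function names, and the greedy selection loop in `cabs_dm` are hypothetical simplifications for intuition, not the authors' code.

```python
import random

def cabs_dm(pool, batch_size):
    """Sketch of Diversity Maximization (CABS-DM): greedily pick the
    sample that adds the most not-yet-covered concepts, so the batch
    spans a broad range of available concepts.
    Assumes len(pool) >= batch_size."""
    candidates = list(pool)
    random.shuffle(candidates)  # random tie-breaking among equal scores
    covered, batch = set(), []
    for _ in range(batch_size):
        best = max(candidates,
                   key=lambda s: len(set(s["concepts"]) - covered))
        batch.append(best)
        covered.update(best["concepts"])
        candidates.remove(best)
    return batch

def cabs_fm(pool, batch_size):
    """Sketch of Frequency Maximization (CABS-FM): prefer samples
    annotated with many concepts (high object multiplicity)."""
    return sorted(pool, key=lambda s: len(s["concepts"]),
                  reverse=True)[:batch_size]

if __name__ == "__main__":
    # Toy pool standing in for DataConcept annotations.
    pool = [
        {"caption": "a dog on a beach", "concepts": ["dog", "beach"]},
        {"caption": "a busy market", "concepts": ["person", "fruit", "stall", "bag"]},
        {"caption": "a dog in a park", "concepts": ["dog", "park"]},
        {"caption": "a red car", "concepts": ["car"]},
    ]
    print([s["caption"] for s in cabs_dm(pool, batch_size=2)])
    print([s["caption"] for s in cabs_fm(pool, batch_size=2)])
```

In the actual framework the sampling operates online over the 128M-pair DataConcept pool and can target arbitrary user-defined concept distributions; the greedy loop above only illustrates the selection criteria.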