Rethinking Composed Image Retrieval Evaluation: A Fine-Grained Benchmark from Image Editing
1️⃣ One-Sentence Summary
By leveraging image-editing techniques, this paper constructs EDIR, a new composed image retrieval benchmark that spans a broad range of categories and fine-grained modification types; it reveals that current state-of-the-art models exhibit significant capability gaps on this task and highlights the limitations of existing evaluation methods.
Composed Image Retrieval (CIR) is a pivotal and complex task in multimodal understanding. Current CIR benchmarks typically feature limited query categories and fail to capture the diverse requirements of real-world scenarios. To bridge this evaluation gap, we leverage image editing to achieve precise control over modification types and content, enabling a pipeline for synthesizing queries across a broad spectrum of categories. Using this pipeline, we construct EDIR, a novel fine-grained CIR benchmark. EDIR encompasses 5,000 high-quality queries structured across five main categories and fifteen subcategories. Our comprehensive evaluation of 13 multimodal embedding models reveals a significant capability gap; even state-of-the-art models (e.g., RzenEmbed and GME) struggle to perform consistently across all subcategories, highlighting the rigorous nature of our benchmark. Through comparative analysis, we further uncover inherent limitations in existing benchmarks, such as modality biases and insufficient categorical coverage. Furthermore, an in-domain training experiment demonstrates the feasibility of our benchmark. This experiment clarifies the task challenges by distinguishing between categories that are solvable with targeted data and those that expose intrinsic limitations of current model architectures.
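To make the evaluation protocol concrete, below is a minimal sketch (not the authors' code) of how a CIR benchmark such as EDIR is typically scored: each query fuses a reference image with a modification text, and a multimodal embedding model is judged by whether the corresponding edited target image ranks among its top retrievals. The `embed_query` and `embed_image` functions are hypothetical placeholders for whichever of the 13 evaluated models is under test.

```python
import numpy as np

def recall_at_k(query_embs: np.ndarray, target_embs: np.ndarray, k: int = 10) -> float:
    """Fraction of queries whose ground-truth target (same row index) ranks in the top k.

    Both embedding matrices are assumed L2-normalized, so the dot product is cosine similarity.
    """
    sims = query_embs @ target_embs.T            # query-vs-gallery similarity matrix
    ranks = np.argsort(-sims, axis=1)            # candidate indices sorted best-first per query
    hits = [i in ranks[i, :k] for i in range(len(query_embs))]
    return float(np.mean(hits))

# Hypothetical usage with a multimodal embedding model under test:
# q = np.stack([embed_query(ref_image, mod_text) for ref_image, mod_text in queries])
# g = np.stack([embed_image(img) for img in gallery_images])  # g[i] is the target for q[i]
# print("Recall@10:", recall_at_k(q, g))
```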
Source: arXiv: 2601.16125