arXiv submission date: 2026-01-16
📄 Abstract - What Matters in Data Curation for Multimodal Reasoning? Insights from the DCVLR Challenge

We study data curation for multimodal reasoning through the NeurIPS 2025 Data Curation for Vision-Language Reasoning (DCVLR) challenge, which isolates dataset selection by fixing the model and training protocol. Using a compact curated dataset derived primarily from Walton Multimodal Cold Start, our submission placed first in the challenge. Through post-competition ablations, we show that difficulty-based example selection on an aligned base dataset is the dominant driver of performance gains. Increasing dataset size does not reliably improve mean accuracy under the fixed training recipe, but mainly reduces run-to-run variance, while commonly used diversity and synthetic augmentation heuristics provide no additional benefit and often degrade performance. These results characterize DCVLR as a saturation-regime evaluation and highlight the central role of alignment and difficulty in data-efficient multimodal reasoning.
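The core technique credited for the gains, difficulty-based example selection on an aligned base dataset, can be illustrated with a short sketch. This is a hypothetical illustration, not the authors' released pipeline: the `Example` type, the `generate` callable (a stand-in for sampling answers from the base vision-language model), and the difficulty thresholds and budget are all assumptions made for clarity. The idea is to score each candidate example by how often a reference model fails on it, then keep examples whose difficulty falls in a target band, up to a fixed budget.

```python
"""Hypothetical sketch of difficulty-based example selection.

Not the DCVLR submission's actual pipeline; scoring rule, thresholds,
and budget are illustrative assumptions.
"""

import random
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    question: str
    answer: str


def estimate_difficulty(
    example: Example,
    generate: Callable[[str], str],
    n_samples: int = 8,
) -> float:
    """Fraction of sampled generations that miss the reference answer.

    `generate` is an assumed wrapper around the base model's sampling API.
    Difficulty 0.0 = always solved; 1.0 = never solved.
    """
    wrong = sum(
        1
        for _ in range(n_samples)
        if generate(example.question).strip() != example.answer.strip()
    )
    return wrong / n_samples


def select_by_difficulty(
    pool: List[Example],
    generate: Callable[[str], str],
    lo: float = 0.3,
    hi: float = 0.9,
    budget: int = 5000,
) -> List[Example]:
    """Keep examples whose estimated difficulty lies in [lo, hi], up to `budget`."""
    scored = [(estimate_difficulty(ex, generate), ex) for ex in pool]
    in_band = [ex for d, ex in scored if lo <= d <= hi]
    random.shuffle(in_band)
    return in_band[:budget]
```

Keeping a difficulty band rather than simply the hardest examples is one plausible design choice: examples the model never solves are often noisy or unanswerable, while examples it always solves contribute little under a fixed training recipe.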

Top-level tags: multi-modal data, model evaluation
Detailed tags: data curation, vision-language reasoning, dataset selection, difficulty-based sampling, benchmark analysis

What Matters in Data Curation for Multimodal Reasoning? Insights from the DCVLR Challenge


1️⃣ One-sentence summary

This study finds that, for multimodal reasoning, carefully selecting appropriately difficult examples from an already aligned base dataset is the key driver of performance gains, whereas simply increasing dataset size or applying common diversity and synthetic augmentation heuristics offers limited benefit and can even hurt.

Source: arXiv:2601.10922