arXiv submission date: 2026-02-11
📄 Abstract - Why Does RL Generalize Better Than SFT? A Data-Centric Perspective on VLM Post-Training

The adaptation of large-scale Vision-Language Models (VLMs) through post-training reveals a pronounced generalization gap: models fine-tuned with Reinforcement Learning (RL) consistently achieve superior out-of-distribution (OOD) performance compared to those trained with Supervised Fine-Tuning (SFT). This paper posits a data-centric explanation for this phenomenon, contending that RL's generalization advantage arises from an implicit data filtering mechanism that inherently prioritizes medium-difficulty training samples. To test this hypothesis, we systematically evaluate the OOD generalization of SFT models across training datasets of varying difficulty levels. Our results confirm that data difficulty is a critical factor, revealing that training on hard samples significantly degrades OOD performance. Motivated by this finding, we introduce Difficulty-Curated SFT (DC-SFT), a straightforward method that explicitly filters the training set based on sample difficulty. Experiments show that DC-SFT not only substantially enhances OOD generalization over standard SFT, but also surpasses the performance of RL-based training, all while providing greater stability and computational efficiency. This work offers a data-centric account of the OOD generalization gap in VLMs and establishes a more efficient pathway to achieving robust generalization. Code is available at this https URL.

Top-level tags: model training, multi-modal, machine learning
Detailed tags: vision-language models, reinforcement learning, supervised fine-tuning, out-of-distribution generalization, data difficulty

Why Does RL Generalize Better Than SFT? A Data-Centric Perspective on VLM Post-Training


1️⃣ One-Sentence Summary

This paper finds that, in vision-language model post-training, reinforcement learning generalizes better than supervised fine-tuning because it implicitly filters the training data toward medium-difficulty samples; building on this insight, it proposes a new method that improves generalization by explicitly filtering training data by difficulty, while being more stable and computationally efficient.
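
As a rough illustration of the difficulty-curated filtering idea, the sketch below scores each sample by a base model's failure rate over k sampled answers and keeps only medium-difficulty samples for SFT. The `answer_fn` interface, the k=8 default, and the 0.2/0.8 thresholds are assumptions for illustration only, not the paper's actual implementation or reported values.

```python
from typing import Callable, Dict, List

def estimate_difficulty(sample: Dict, answer_fn: Callable[[str], str], k: int = 8) -> float:
    """Difficulty proxy: the base model's failure rate over k sampled answers.
    (Illustrative; the paper's exact difficulty measure may differ.)"""
    failures = sum(answer_fn(sample["question"]) != sample["answer"] for _ in range(k))
    return failures / k  # 0.0 = always solved (easy), 1.0 = never solved (hard)

def difficulty_curated_filter(dataset: List[Dict],
                              answer_fn: Callable[[str], str],
                              low: float = 0.2,
                              high: float = 0.8) -> List[Dict]:
    """Keep only medium-difficulty samples for SFT.
    The low/high thresholds here are placeholders, not values from the paper."""
    return [s for s in dataset
            if low <= estimate_difficulty(s, answer_fn) <= high]
```

The filtered subset would then be used in place of the full dataset for standard SFT; everything else in the training pipeline stays unchanged.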

Source: arXiv 2602.10815