Temporal Gains, Spatial Costs: Revisiting Video Fine-Tuning in Multimodal Large Language Models
1️⃣ One-Sentence Summary
This study finds that fine-tuning multimodal large language models on video data improves their understanding of dynamic video, but often fails to improve, or even degrades, their recognition of static images — highlighting a core challenge of balancing spatial and temporal understanding in joint training.
Multimodal large language models (MLLMs) are typically trained in multiple stages, with video-based supervised fine-tuning (Video-SFT) serving as a key step for improving visual understanding. Yet its effect on the fine-grained evolution of visual capabilities, particularly the balance between spatial and temporal understanding, remains poorly understood. In this paper, we systematically study how Video-SFT reshapes visual capabilities in MLLMs. Across architectures, parameter scales, and frame sampling settings, we observe a consistent pattern: Video-SFT reliably improves video performance, but often yields limited gains or even degradation on static image benchmarks. We further show that this trade-off is closely tied to temporal budget: increasing the number of sampled frames generally improves video performance, but does not reliably improve static image performance. Motivated by this finding, we study an instruction-aware Hybrid-Frame strategy that adaptively allocates frame counts and partially mitigates the image-video trade-off. Our results indicate that Video-SFT is not a free lunch for MLLMs, and preserving spatial understanding remains a central challenge in joint image-video training.
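The abstract describes an instruction-aware Hybrid-Frame strategy that adaptively allocates frame counts per query. The paper does not give its implementation here; the following is a minimal illustrative sketch of the general idea, where the keyword heuristic, function name `allocate_frames`, and budget values are all assumptions for illustration, not the authors' method.

```python
# Illustrative sketch (NOT the paper's implementation): an instruction-aware
# frame allocator that grants temporally focused questions a larger frame
# budget and spatially focused questions a smaller one. The cue list and
# budgets below are hypothetical.

TEMPORAL_CUES = {"when", "before", "after", "order", "sequence", "first", "then"}

def allocate_frames(instruction: str, low_budget: int = 4, high_budget: int = 32) -> int:
    """Return a frame-sampling budget based on the instruction text."""
    text = instruction.lower()
    needs_temporal = any(cue in text for cue in TEMPORAL_CUES)
    return high_budget if needs_temporal else low_budget

# A temporal question gets more frames than a purely spatial one.
print(allocate_frames("What happens after the man opens the door?"))  # 32
print(allocate_frames("What color is the car?"))                      # 4
```

In practice such a router would likely use the model itself (or a lightweight classifier) rather than keywords, but the point it illustrates matches the abstract: spending the temporal budget only where the instruction demands it partially mitigates the image-video trade-off.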
Source: arXiv: 2603.17541