Motion Attribution for Video Generation
1️⃣ One-sentence summary
This paper proposes a framework called Motive that identifies which training video clips most strongly influence the motion quality of AI-generated videos, and uses these findings to guide data selection so that generated motion becomes smoother and more physically plausible.
Despite the rapid progress of video generation models, the role of data in influencing motion is poorly understood. We present Motive (MOTIon attribution for Video gEneration), a motion-centric, gradient-based data attribution framework that scales to modern, large, high-quality video datasets and models. We use this to study which fine-tuning clips improve or degrade temporal dynamics. Motive isolates temporal dynamics from static appearance via motion-weighted loss masks, yielding efficient and scalable motion-specific influence computation. On text-to-video models, Motive identifies clips that strongly affect motion and guides data curation that improves temporal consistency and physical plausibility. With Motive-selected high-influence data, our method improves both motion smoothness and dynamic degree on VBench, achieving a 74.1% human preference win rate compared with the pretrained base model. To our knowledge, this is the first framework to attribute motion rather than visual appearance in video generative models and to use it to curate fine-tuning data.
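The abstract names two core ingredients, motion-weighted loss masks and gradient-based influence, but gives no implementation details. Below is a minimal sketch of how such a pipeline could look, assuming a per-pixel denoising-style loss and a TracIn-style gradient dot product as the influence estimator; the function names (motion_weight_mask, motion_weighted_loss, influence_score) are hypothetical, and the paper's actual masking scheme and influence computation may differ.

```python
import torch

def motion_weight_mask(video, eps=1e-6):
    # video: (T, C, H, W). Weight each pixel by how much it changes between
    # frames, so the loss emphasizes temporal dynamics over static appearance.
    diff = (video[1:] - video[:-1]).abs().mean(dim=1, keepdim=True)  # (T-1, 1, H, W)
    diff = torch.cat([diff, diff[-1:]], dim=0)                       # pad back to T frames
    return diff / (diff.mean() + eps)                                # normalize to mean ~1

def motion_weighted_loss(pred, target, video):
    # Stand-in for a denoising loss, reweighted per pixel by the motion mask.
    mask = motion_weight_mask(video)
    return ((pred - target) ** 2 * mask).mean()

def influence_score(model, train_loss, query_loss):
    # TracIn-style influence: dot product between the gradient of a training
    # clip's (motion-weighted) loss and the gradient of a motion-focused
    # query loss, both taken with respect to the model parameters.
    params = [p for p in model.parameters() if p.requires_grad]
    g_train = torch.autograd.grad(train_loss, params, retain_graph=True)
    g_query = torch.autograd.grad(query_loss, params)
    return sum((gt * gq).sum() for gt, gq in zip(g_train, g_query)).item()
```

Under this reading, clips whose gradients align strongly (large positive scores) with a motion-focused query set would be kept for fine-tuning, while strongly negative ones would be filtered out during data curation.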
Source: arXiv: 2601.08828