📄 Paper Summary
Reg-DPO: SFT-Regularized Direct Preference Optimization with GT-Pair for Improving Video Generation
1️⃣ One-Sentence Summary
This paper proposes a video-generation optimization method that requires no manual annotation, automatically constructs high-quality training data, and improves training stability: it builds preference pairs by contrasting real videos with model-generated ones and adds a regularization term to the training objective, significantly improving both the quality and efficiency of video generation.
Recent studies have identified Direct Preference Optimization (DPO) as an efficient and reward-free approach to improving video generation quality. However, existing methods largely follow image-domain paradigms and are mainly developed on small-scale models (approximately 2B parameters), limiting their ability to address the unique challenges of video tasks, such as costly data construction, unstable training, and heavy memory consumption. To overcome these limitations, we introduce a GT-Pair that automatically builds high-quality preference pairs by using real videos as positives and model-generated videos as negatives, eliminating the need for any external annotation. We further present Reg-DPO, which incorporates the SFT loss as a regularization term into the DPO loss to enhance training stability and generation fidelity. Additionally, by combining the FSDP framework with multiple memory optimization techniques, our approach achieves nearly three times higher training capacity than using FSDP alone. Extensive experiments on both I2V and T2V tasks across multiple datasets demonstrate that our method consistently outperforms existing approaches, delivering superior video generation quality.
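The core idea of Reg-DPO, adding the SFT loss as a regularizer to the DPO objective over GT-Pairs, can be illustrated with a minimal scalar sketch. This is an assumption-based illustration, not the paper's implementation: the function name, the hyperparameters `beta` and `lam`, and the scalar formulation are all hypothetical; the real method operates on diffusion-model losses over video batches.

```python
import math

def reg_dpo_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg,
                 sft_loss, beta=0.1, lam=1.0):
    """Hypothetical sketch of a DPO loss with an SFT regularization term.

    logp_pos / logp_neg: policy log-likelihoods of the preferred sample
    (the real, ground-truth video in a GT-Pair) and the dispreferred
    sample (the model-generated video); ref_* are the frozen reference
    model's log-likelihoods. `sft_loss` is the supervised loss on the
    positive sample; `beta` and `lam` are illustrative hyperparameters.
    """
    # Standard DPO term: negative log-sigmoid of the scaled margin
    # between the implicit rewards of the positive and negative samples.
    margin = (logp_pos - ref_logp_pos) - (logp_neg - ref_logp_neg)
    dpo = -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
    # SFT regularization anchors the policy to the ground-truth data,
    # which the paper credits with more stable training.
    return dpo + lam * sft_loss
```

With a margin of 1 and `beta=1.0`, the DPO term is `-log(sigmoid(1)) ≈ 0.313`, to which the weighted SFT loss is simply added.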