YingVideo-MV: Music-Driven Multi-Stage Video Generation
1️⃣ One-Sentence Summary
This paper proposes YingVideo-MV, the first music-driven long-video generation framework: by analyzing music semantics, planning shots, and controlling camera motion, it automatically synthesizes high-quality music-performance videos that are tightly synchronized with the music's rhythm and emotion.
While diffusion models for audio-driven avatar video generation have achieved notable progress in synthesizing long sequences with natural audio-visual synchronization and identity consistency, the generation of music-performance videos with camera motions remains largely unexplored. We present YingVideo-MV, the first cascaded framework for music-driven long-video generation. Our approach integrates audio semantic analysis, an interpretable shot-planning module (MV-Director), temporal-aware diffusion Transformer architectures, and long-sequence consistency modeling to enable automatic synthesis of high-quality music-performance videos from audio signals. We construct a large-scale Music-in-the-Wild Dataset from web data to support diverse, high-quality generation. Observing that existing long-video generation methods lack explicit camera motion control, we introduce a camera adapter module that embeds camera poses into the latent noise. To enhance continuity between clips during long-sequence inference, we further propose a time-aware dynamic window strategy that adaptively adjusts the denoising range based on audio embeddings. Comprehensive benchmark tests demonstrate that YingVideo-MV achieves outstanding performance in generating coherent and expressive music videos, and enables precise music-motion-camera synchronization. More videos are available on our project page: this https URL .
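The abstract names two mechanisms without detailing them: a camera adapter that embeds camera poses into the latent noise, and a time-aware dynamic window that adapts the denoising range to the audio. As a rough illustration only, here is a minimal PyTorch sketch of how such conditioning could look; every name, shape, and heuristic in it (`CameraAdapter`, `pose_dim=12` for a flattened 3x4 extrinsic, the variation-based window rule) is a hypothetical assumption, not the paper's actual design.

```python
import torch
import torch.nn as nn


class CameraAdapter(nn.Module):
    """Hypothetical sketch: project per-frame camera poses into the
    latent channel space and add them to the video diffusion latents.
    Names and shapes are illustrative assumptions, not the paper's API."""

    def __init__(self, pose_dim: int = 12, latent_channels: int = 4):
        super().__init__()
        # pose_dim=12 assumes a flattened 3x4 camera extrinsic per frame.
        self.proj = nn.Sequential(
            nn.Linear(pose_dim, 64),
            nn.SiLU(),
            nn.Linear(64, latent_channels),
        )

    def forward(self, latents: torch.Tensor, poses: torch.Tensor) -> torch.Tensor:
        # latents: (B, T, C, H, W) video latents; poses: (B, T, pose_dim)
        pose_emb = self.proj(poses)           # (B, T, C)
        pose_emb = pose_emb[..., None, None]  # broadcast over H and W
        return latents + pose_emb


def dynamic_window(audio_emb: torch.Tensor, base: int = 8, max_extra: int = 8) -> int:
    """Hypothetical sketch of an audio-adaptive denoising window:
    high frame-to-frame audio variation (e.g. a beat change) shrinks
    the window; stable audio widens it. Purely an assumed heuristic."""
    # audio_emb: (T, D) per-frame audio features for the current clip.
    variation = (audio_emb[1:] - audio_emb[:-1]).norm(dim=-1).mean()
    scale = torch.sigmoid(-variation)  # more variation -> smaller scale
    return base + int(max_extra * scale.item())


# Toy usage: condition random latent noise on 16 frames of camera poses.
adapter = CameraAdapter()
latents = torch.randn(1, 16, 4, 32, 32)
poses = torch.randn(1, 16, 12)
print(adapter(latents, poses).shape)          # torch.Size([1, 16, 4, 32, 32])
print(dynamic_window(torch.randn(16, 128)))   # window length in [8, 16]
```

In the paper's actual pipeline the adapter presumably feeds the temporal-aware diffusion Transformer; this toy only shows pose information being broadcast into the latent tensor before denoising.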