arXiv submission date: 2025-12-09
📄 Abstract - Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance

We present Wan-Move, a simple and scalable framework that brings motion control to video generative models. Existing motion-controllable methods typically suffer from coarse control granularity and limited scalability, leaving their outputs insufficient for practical use. We narrow this gap by achieving precise and high-quality motion control. Our core idea is to directly make the original condition features motion-aware for guiding video synthesis. To this end, we first represent object motions with dense point trajectories, allowing fine-grained control over the scene. We then project these trajectories into latent space and propagate the first frame's features along each trajectory, producing an aligned spatiotemporal feature map that specifies how each scene element should move. This feature map serves as the updated latent condition and is naturally integrated into an off-the-shelf image-to-video model (e.g., Wan-I2V-14B) as motion guidance without any architecture change. This removes the need for auxiliary motion encoders and makes fine-tuning base models easily scalable. Through scaled training, Wan-Move generates 5-second, 480p videos whose motion controllability rivals Kling 1.5 Pro's commercial Motion Brush, as indicated by user studies. To support comprehensive evaluation, we further design MoveBench, a rigorously curated benchmark featuring diverse content categories and hybrid-verified annotations. It is distinguished by larger data volume, longer video durations, and high-quality motion annotations. Extensive experiments on MoveBench and a public dataset consistently show Wan-Move's superior motion quality. Code, models, and benchmark data are made publicly available.
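The abstract's core mechanism, propagating first-frame latent features along dense point trajectories to form a motion-aware condition map, can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the function name, the `stride` parameter, the nearest-cell scatter rule, and the zero-fill for untracked regions are all assumptions.

```python
import torch

def build_motion_conditioned_latent(first_frame_feat, trajectories, stride=8):
    """Hypothetical sketch of latent trajectory guidance.

    first_frame_feat: (C, H, W) latent features of the first frame.
    trajectories:     (T, N, 2) pixel-space (x, y) positions of N tracked
                      points over T frames; row 0 corresponds to frame 0.
    stride:           assumed spatial downsampling factor from pixels to latents.

    Returns a (T, C, H, W) condition map in which each tracked point's
    first-frame feature is carried along its trajectory over time.
    """
    C, H, W = first_frame_feat.shape
    T, N, _ = trajectories.shape

    # Map pixel coordinates to latent-grid coordinates and clamp to bounds.
    coords = (trajectories / stride).round().long()
    coords[..., 0] = coords[..., 0].clamp(0, W - 1)  # x
    coords[..., 1] = coords[..., 1].clamp(0, H - 1)  # y

    # Sample each point's feature from the first frame at its start location.
    x0, y0 = coords[0, :, 0], coords[0, :, 1]   # each (N,)
    point_feats = first_frame_feat[:, y0, x0]   # (C, N)

    # Propagate those features along the trajectories across all frames.
    cond = torch.zeros(T, C, H, W, dtype=first_frame_feat.dtype)
    for t in range(T):
        xt, yt = coords[t, :, 0], coords[t, :, 1]
        cond[t, :, yt, xt] = point_feats
    return cond
```

In this sketch, latent cells untouched by any trajectory stay zero, leaving those regions unconstrained; the actual grid size, temporal compression, and handling of colliding points would depend on the base model's VAE and the paper's design, which the abstract does not specify.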

Top-level tags: video generation, aigc, model training
Detailed tags: motion control, video synthesis, latent guidance, trajectory, benchmark

Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance


1️⃣ One-sentence summary

This paper proposes a new framework called Wan-Move, which maps dense object-motion trajectories directly into the video generation model's latent space to achieve fine-grained, high-quality control over object motion in videos; without any change to the existing model architecture, it can generate smooth, controllable videos up to 5 seconds long.


Source: arXiv:2512.08765