SemanticGen: Video Generation in Semantic Space
1️⃣ One-Sentence Summary
This paper proposes SemanticGen, a method that generates videos by first performing global planning in a compact semantic space and then filling in details. It converges faster and is more computationally efficient than existing approaches, and is especially well suited to long video generation.
State-of-the-art video generative models typically learn the distribution of video latents in the VAE space and map them to pixels using a VAE decoder. While this approach can generate high-quality videos, it suffers from slow convergence and is computationally expensive when generating long videos. In this paper, we introduce SemanticGen, a novel solution to address these limitations by generating videos in the semantic space. Our main insight is that, due to the inherent redundancy in videos, the generation process should begin in a compact, high-level semantic space for global planning, followed by the addition of high-frequency details, rather than directly modeling a vast set of low-level video tokens using bi-directional attention. SemanticGen adopts a two-stage generation process. In the first stage, a diffusion model generates compact semantic video features, which define the global layout of the video. In the second stage, another diffusion model generates VAE latents conditioned on these semantic features to produce the final output. We observe that generation in the semantic space leads to faster convergence compared to the VAE latent space. Our method is also effective and computationally efficient when extended to long video generation. Extensive experiments demonstrate that SemanticGen produces high-quality videos and outperforms state-of-the-art approaches and strong baselines.
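To make the two-stage pipeline concrete, here is a minimal, illustrative sketch in PyTorch. Everything in it is an assumption: the `Denoiser` modules, feature dimensions, and the crude sampling loop are toy stand-ins, not the paper's architecture or scheduler. It only shows the control flow the abstract describes: stage 1 samples compact semantic features, and stage 2 samples VAE latents conditioned on them.

```python
# Hypothetical sketch of SemanticGen's two-stage generation (toy shapes,
# toy denoisers, and a naive sampling loop; not the paper's actual models).
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Toy stand-in for a diffusion model: predicts noise from a noisy
    input, an optional conditioning tensor, and a scalar timestep."""
    def __init__(self, dim, cond_dim=0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, 4 * dim), nn.SiLU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, x, t, cond=None):
        t_emb = t.float().expand(x.shape[0], 1) / 1000.0
        parts = [x, t_emb] if cond is None else [x, cond, t_emb]
        return self.net(torch.cat(parts, dim=-1))

@torch.no_grad()
def sample(denoiser, shape, steps=50, cond=None):
    """Naive ancestral-sampling loop, for illustration only."""
    x = torch.randn(shape)
    for t in reversed(range(steps)):
        eps = denoiser(x, torch.tensor([t]), cond)
        x = x - eps / steps  # crude denoising step, not a real scheduler
        if t > 0:
            x = x + 0.01 * torch.randn_like(x)
    return x

SEM_DIM, LATENT_DIM, BATCH = 64, 256, 2  # assumed sizes

# Stage 1: sample compact semantic features that plan the video globally.
stage1 = Denoiser(SEM_DIM)
semantic_feats = sample(stage1, (BATCH, SEM_DIM))

# Stage 2: sample VAE latents conditioned on the semantic plan.
stage2 = Denoiser(LATENT_DIM, cond_dim=SEM_DIM)
vae_latents = sample(stage2, (BATCH, LATENT_DIM), cond=semantic_feats)

# A VAE decoder (not shown) would then map vae_latents to pixels.
print(semantic_feats.shape, vae_latents.shape)
```

The design point the sketch captures is that the semantic space is much smaller than the VAE latent space (here 64 vs. 256 dimensions), so the first-stage model does its global planning over far fewer tokens before the second stage adds high-frequency detail.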
Source: arXiv:2512.20619