📄 Abstract - Semantics Lead the Way: Harmonizing Semantic and Texture Modeling with Asynchronous Latent Diffusion

Latent Diffusion Models (LDMs) inherently follow a coarse-to-fine generation process, where high-level semantic structure is generated slightly earlier than fine-grained texture. This indicates the preceding semantics potentially benefit texture generation by providing a semantic anchor. Recent advances have integrated semantic priors from pretrained visual encoders to further enhance LDMs, yet they still denoise semantic and VAE-encoded texture synchronously, neglecting such ordering. Observing these, we propose Semantic-First Diffusion (SFD), a latent diffusion paradigm that explicitly prioritizes semantic formation. SFD first constructs composite latents by combining a compact semantic latent, which is extracted from a pretrained visual encoder via a dedicated Semantic VAE, with the texture latent. The core of SFD is to denoise the semantic and texture latents asynchronously using separate noise schedules: semantics precede textures by a temporal offset, providing clearer high-level guidance for texture refinement and enabling natural coarse-to-fine generation. On ImageNet 256x256 with guidance, SFD achieves FID 1.06 (LightningDiT-XL) and FID 1.04 (1.0B LightningDiT-XXL), while achieving up to 100x faster convergence than the original DiT. SFD also improves existing methods like ReDi and VA-VAE, demonstrating the effectiveness of asynchronous, semantics-led modeling. Project page and code: this https URL.
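The abstract's core idea — denoising semantic and texture latents with separate, offset noise schedules — can be sketched for the forward (noising) direction. This is a minimal illustration, not the paper's implementation: the DDPM-style linear beta schedule, the latent sizes, and all function names here are assumptions; the paper only specifies that semantics precede textures by a temporal offset.

```python
import math
import random

def make_alpha_bar(T=1000):
    # Cumulative product of (1 - beta) under a linear beta schedule
    # (standard DDPM choice; the paper's actual schedule is unspecified here).
    alpha_bar, prod = [], 1.0
    for i in range(T):
        beta = 1e-4 + (0.02 - 1e-4) * i / (T - 1)
        prod *= 1.0 - beta
        alpha_bar.append(prod)
    return alpha_bar

def q_sample(z, step, alpha_bar, rng):
    # Forward-noise a latent vector to timestep `step`:
    # x_t = sqrt(alpha_bar_t) * z + sqrt(1 - alpha_bar_t) * eps
    a = alpha_bar[step]
    return [math.sqrt(a) * v + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
            for v in z]

def asynchronous_noising(z_sem, z_tex, t, offset, alpha_bar, rng):
    # The semantic latent sits at an earlier (cleaner) timestep t - offset,
    # so during reverse sampling it resolves before the texture latent it
    # guides -- the "semantics lead the way" ordering.
    t_sem = max(t - offset, 0)
    return (q_sample(z_sem, t_sem, alpha_bar, rng),
            q_sample(z_tex, t, alpha_bar, rng))

rng = random.Random(0)
ab = make_alpha_bar()
z_sem = [0.0] * 16   # compact semantic latent (hypothetical size)
z_tex = [0.0] * 64   # VAE-encoded texture latent (hypothetical size)
x_sem, x_tex = asynchronous_noising(z_sem, z_tex, t=500, offset=100,
                                    alpha_bar=ab, rng=rng)
```

At `t=500` with `offset=100`, the semantic latent is noised to step 400, where `alpha_bar` is larger, so it retains more signal than the texture latent at step 500 — a cleaner semantic anchor at every point of the trajectory.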

Top-level tags: computer vision, model training, multi-modal
Detailed tags: latent diffusion, semantic modeling, texture generation, asynchronous denoising, image synthesis

Semantics Lead the Way: Harmonizing Semantic and Texture Modeling with Asynchronous Latent Diffusion


1️⃣ One-sentence summary

This paper proposes a new method called Semantic-First Diffusion (SFD), which has the image-generation model first establish the image's overall semantic structure and then refine texture details on top of it, yielding higher-quality, faster-converging generation that better mirrors the coarse-to-fine way humans perceive images.

