DiP: Taming Diffusion Models in Pixel Space
1️⃣ One-Sentence Summary
This paper proposes DiP, a new, efficient pixel-space diffusion framework. By decomposing image generation into two synergistic stages, global structure construction and local detail restoration, DiP matches latent diffusion models in both generation quality and computational efficiency without relying on a compression encoder, and substantially accelerates high-resolution image synthesis.
Diffusion models face a fundamental trade-off between generation quality and computational efficiency. Latent Diffusion Models (LDMs) offer an efficient solution but suffer from potential information loss and non-end-to-end training. In contrast, existing pixel space models bypass VAEs but are computationally prohibitive for high-resolution synthesis. To resolve this dilemma, we propose DiP, an efficient pixel space diffusion framework. DiP decouples generation into a global and a local stage: a Diffusion Transformer (DiT) backbone operates on large patches for efficient global structure construction, while a co-trained lightweight Patch Detailer Head leverages contextual features to restore fine-grained local details. This synergistic design achieves computational efficiency comparable to LDMs without relying on a VAE. DiP achieves up to 10$\times$ faster inference than previous methods while increasing the total parameter count by only 0.3%, and reaches a 1.79 FID score on ImageNet 256$\times$256.
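To make the decoupled two-stage design concrete, below is a minimal PyTorch sketch of the idea: a transformer backbone operating on large pixel patches for global structure, followed by a lightweight per-patch head that restores local detail. All module names (`DiTBackbone`, `PatchDetailerHead`, `DiP`), layer choices, and shapes are illustrative assumptions, not the paper's actual architecture; the real model also conditions on the diffusion timestep and other inputs, which are omitted here.

```python
# Hedged sketch of DiP's global/local decoupling; names and dimensions are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class DiTBackbone(nn.Module):
    """Stand-in for a Diffusion Transformer working on large pixel patches."""

    def __init__(self, patch_size=16, dim=512, depth=4, heads=8):
        super().__init__()
        self.patch_size = patch_size
        self.embed = nn.Linear(3 * patch_size * patch_size, dim)  # patchify -> tokens
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        # x: (B, 3, H, W) noisy image in pixel space
        B, C, H, W = x.shape
        p = self.patch_size
        # Split into non-overlapping large patches and flatten each into a token.
        tokens = (
            x.unfold(2, p, p).unfold(3, p, p)        # (B, C, H/p, W/p, p, p)
             .permute(0, 2, 3, 1, 4, 5)
             .reshape(B, (H // p) * (W // p), C * p * p)
        )
        return self.blocks(self.embed(tokens))       # global contextual features


class PatchDetailerHead(nn.Module):
    """Lightweight head mapping each contextual token back to a detailed patch."""

    def __init__(self, patch_size=16, dim=512):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(),
            nn.Linear(dim, 3 * patch_size * patch_size),
        )

    def forward(self, ctx, H, W):
        B, N, _ = ctx.shape
        p = self.patch_size
        patches = self.proj(ctx).reshape(B, H // p, W // p, 3, p, p)
        # Reassemble patches into a full-resolution prediction.
        return patches.permute(0, 3, 1, 4, 2, 5).reshape(B, 3, H, W)


class DiP(nn.Module):
    """Stage 1: global structure via the backbone; stage 2: local detail via the head."""

    def __init__(self, patch_size=16, dim=512):
        super().__init__()
        self.backbone = DiTBackbone(patch_size, dim)
        self.detailer = PatchDetailerHead(patch_size, dim)

    def forward(self, noisy_image):
        B, C, H, W = noisy_image.shape
        ctx = self.backbone(noisy_image)
        return self.detailer(ctx, H, W)


if __name__ == "__main__":
    model = DiP()
    x = torch.randn(2, 3, 256, 256)   # toy batch at ImageNet resolution
    print(model(x).shape)             # torch.Size([2, 3, 256, 256])
```

Because the detailer head is only a small per-patch MLP on top of the backbone's tokens, it adds a tiny fraction of the parameters while letting the backbone use large patches, which is the source of the efficiency claim in the abstract.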