Denoising, Fast and Slow: Difficulty-Aware Adaptive Sampling for Image Generation
1️⃣ One-sentence summary
This paper proposes an image generation method called Patch Forcing: the model denoises different regions of an image (e.g., simple backgrounds vs. complex objects) at different speeds, advancing easy regions first so they can provide context for harder ones. This improves image quality without additional compute and achieves better results across multiple tasks.
Diffusion- and flow-based models usually allocate compute uniformly across space, updating all patches with the same timestep and number of function evaluations. While convenient, this ignores the heterogeneity of natural images: some regions are easy to denoise, whereas others benefit from more refinement or additional context. Motivated by this, we explore patch-level noise scales for image synthesis. We find that naively varying timesteps across image tokens performs poorly, as it exposes the model to overly informative training states that do not occur at inference. We therefore introduce a timestep sampler that explicitly controls the maximum patch-level information available during training, and show that moving from global to patch-level timesteps already improves image generation over standard baselines. By further augmenting the model with a lightweight per-patch difficulty head, we enable adaptive samplers that allocate compute dynamically where it is most needed. Combined with noise levels varying over both space and diffusion time, this yields Patch Forcing (PF), a framework that advances easier regions earlier so they can provide context for harder ones. PF achieves superior results on class-conditional ImageNet, remains orthogonal to representation alignment and guidance methods, and scales to text-to-image synthesis. Our results suggest that patch-level denoising schedules provide a promising foundation for adaptive image generation.
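The abstract's key ingredient is a timestep sampler that assigns each patch its own noise level while capping how much patch-level information the model can see during training. The paper does not spell out the sampler here, so the following is only a minimal NumPy sketch under assumed conventions: t = 1 is pure noise, t = 0 is a clean image, and a hypothetical `max_info_gap` parameter bounds how much cleaner any patch may be than a shared global timestep.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patch_timesteps(num_patches, t_global, max_info_gap=0.2):
    """Hypothetical sketch of a patch-level timestep sampler.

    Each patch gets its own timestep, jittered around a shared global
    timestep t_global. No patch is allowed to be more than max_info_gap
    "cleaner" (lower t) than the global level -- a crude stand-in for the
    paper's explicit cap on the maximum patch-level information exposed
    during training, which prevents overly informative states that would
    not occur at inference.
    """
    # Independent per-patch jitter around the global noise level.
    t = t_global + rng.uniform(-max_info_gap, max_info_gap, size=num_patches)
    # Clip so no patch falls below the information floor, and all
    # timesteps stay in the valid [0, 1] range.
    return np.clip(t, max(t_global - max_info_gap, 0.0), 1.0)

# Example: 16 patches jittered around a global timestep of 0.7;
# every patch stays within [0.5, 0.9].
patch_t = sample_patch_timesteps(num_patches=16, t_global=0.7)
print(patch_t.round(2))
```

In a full Patch Forcing setup, the per-patch difficulty head described in the abstract would replace the uniform jitter with learned, difficulty-dependent offsets; the clipping step is what keeps the training distribution consistent with inference.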
Source: arXiv: 2604.19141