Glance: Accelerating Diffusion Models with 1 Sample
1️⃣ One-Sentence Summary
This paper proposes a smart acceleration method called Glance. It equips a diffusion model with two lightweight LoRA adapters for different generation phases (one for the slow semantic phase, one for the fast reconstruction phase). Trained with only 1 sample in about one hour, it achieves up to 5× inference acceleration while preserving image quality and generalization.
Diffusion models have achieved remarkable success in image generation, yet their deployment remains constrained by heavy computational cost and the need for numerous inference steps. Previous efforts on fewer-step distillation attempt to skip redundant steps by training compact student models, yet they often suffer from heavy retraining costs and degraded generalization. In this work, we take a different perspective: we accelerate smartly, not evenly, applying smaller speedups to early semantic stages and larger ones to later redundant phases. We instantiate this phase-aware strategy with two experts that specialize in the slow and fast denoising phases. Surprisingly, instead of investing massive effort in retraining student models, we find that simply equipping the base model with lightweight LoRA adapters achieves both efficient acceleration and strong generalization. We refer to these two adapters as Slow-LoRA and Fast-LoRA. Through extensive experiments, our method achieves up to 5× acceleration over the base model while maintaining comparable visual quality across diverse benchmarks. Remarkably, the LoRA experts are trained with only 1 sample on a single V100 within one hour, yet the resulting models generalize strongly on unseen prompts.
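The phase-aware idea, applying a small speedup in the early semantic phase and a large one in the late redundant phase while switching between the two LoRA experts, can be sketched as a sampling schedule. This is a minimal illustrative sketch, not the paper's released code: the split point, the per-phase speedup factors, and names like `build_schedule` are all assumptions for illustration.

```python
# Minimal sketch of a phase-aware acceleration schedule (assumed values;
# the split fraction and speedup factors are illustrative, not from the paper).
PHASE_SPLIT = 0.4   # assumed fraction of steps forming the early semantic phase
SLOW_SPEEDUP = 2    # assumed smaller stride (speedup) for the semantic phase
FAST_SPEEDUP = 8    # assumed larger stride for the late redundant phase

def build_schedule(base_steps: int):
    """Return (timestep_index, expert) pairs: early steps are sampled
    densely with the Slow-LoRA expert, late steps sparsely with Fast-LoRA."""
    split = int(base_steps * PHASE_SPLIT)
    schedule = []
    t = 0
    while t < split:                     # slow semantic phase: small stride
        schedule.append((t, "slow_lora"))
        t += SLOW_SPEEDUP
    while t < base_steps:                # fast redundant phase: large stride
        schedule.append((t, "fast_lora"))
        t += FAST_SPEEDUP
    return schedule

schedule = build_schedule(50)
print(len(schedule))   # far fewer steps than the base model's 50
```

At inference time, each scheduled step would run the frozen base model with the matching LoRA adapter activated; the uneven strides are what make the overall speedup "smart, not even."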