📄 Abstract - Terminal Velocity Matching

We propose Terminal Velocity Matching (TVM), a generalization of flow matching that enables high-fidelity one- and few-step generative modeling. TVM models the transition between any two diffusion timesteps and regularizes its behavior at its terminal time rather than at the initial time. We prove that TVM provides an upper bound on the $2$-Wasserstein distance between data and model distributions when the model is Lipschitz continuous. However, since Diffusion Transformers lack this property, we introduce minimal architectural changes that achieve stable, single-stage training. To make TVM efficient in practice, we develop a fused attention kernel that supports backward passes on Jacobian-vector products, which scales well with transformer architectures. On ImageNet-256x256, TVM achieves 3.29 FID with a single function evaluation (NFE) and 1.99 FID with 4 NFEs. It likewise achieves 4.32 1-NFE FID and 2.94 4-NFE FID on ImageNet-512x512, representing state-of-the-art performance for one-/few-step models trained from scratch.
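The abstract's key systems contribution is supporting a *backward pass through a Jacobian-vector product*. As a minimal sketch of what that means (a toy PyTorch example, not the paper's fused attention kernel; the `velocity` function below is a hypothetical stand-in for the Diffusion Transformer):

```python
import torch

# Toy stand-in for a velocity model u(x, t); the paper's actual model
# is a Diffusion Transformer, which this sketch does not reproduce.
def velocity(x, t):
    return torch.tanh(x) * t

x = torch.randn(4, 8, requires_grad=True)  # batch of latents
v = torch.ones_like(x)                     # tangent direction

# Jacobian-vector product of velocity w.r.t. x along direction v.
# create_graph=True keeps the JVP itself differentiable, so a loss
# defined on it admits a backward pass -- the capability the paper's
# fused attention kernel provides efficiently for transformer layers.
_, jvp_out = torch.autograd.functional.jvp(
    lambda x_: velocity(x_, 0.7), (x,), (v,), create_graph=True
)

loss = (jvp_out ** 2).mean()  # toy objective defined on the JVP
loss.backward()               # gradients flow back to x (and, in a real
                              # model, to the network parameters)
```

Training objectives that penalize a model's directional derivatives, as TVM does at the terminal time, require exactly this second-order pattern, which is why a memory-efficient fused kernel matters at transformer scale.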

Top-level tags: model training, computer vision, machine learning
Detailed tags: flow matching, diffusion models, generative modeling, transformer architecture, image generation

📄 Paper Summary

Terminal Velocity Matching


1️⃣ One-Sentence Summary

This work proposes a new method called Terminal Velocity Matching, which regularizes a diffusion model's behavior at the end of generation rather than at the start. It produces high-quality images in only 1 to 4 steps and achieves state-of-the-art one-/few-step generation results on ImageNet.


📄 Open the original PDF