arXiv submission date: 2025-12-11
📄 Abstract - Bidirectional Normalizing Flow: From Data to Noise and Back

Normalizing Flows (NFs) have been established as a principled framework for generative modeling. Standard NFs consist of a forward process and a reverse process: the forward process maps data to noise, while the reverse process generates samples by inverting it. Typical NF forward transformations are constrained by explicit invertibility, ensuring that the reverse process can serve as their exact analytic inverse. Recent developments in TARFlow and its variants have revitalized NF methods by combining Transformers and autoregressive flows, but have also exposed causal decoding as a major bottleneck. In this work, we introduce Bidirectional Normalizing Flow ($\textbf{BiFlow}$), a framework that removes the need for an exact analytic inverse. BiFlow learns a reverse model that approximates the underlying noise-to-data inverse mapping, enabling more flexible loss functions and architectures. Experiments on ImageNet demonstrate that BiFlow, compared to its causal decoding counterpart, improves generation quality while accelerating sampling by up to two orders of magnitude. BiFlow yields state-of-the-art results among NF-based methods and competitive performance among single-evaluation ("1-NFE") methods. Following recent encouraging progress on NFs, we hope our work will draw further attention to this classical paradigm.
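The abstract contrasts standard NFs, whose reverse process must be the exact analytic inverse of the forward map, with BiFlow, which instead learns an approximate noise-to-data reverse model. The following toy sketch illustrates that distinction on a one-dimensional affine flow; the regression-based reverse model here is a hypothetical stand-in for illustration only and is not the paper's actual architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy affine flow layer: forward maps data x to noise z = (x - b) * exp(-s).
# A standard NF requires the exact analytic inverse x = z * exp(s) + b.
s, b = 0.5, 1.0

def forward(x):
    """Data-to-noise direction."""
    return (x - b) * np.exp(-s)

def exact_inverse(z):
    """Analytic noise-to-data inverse required by standard NFs."""
    return z * np.exp(s) + b

# BiFlow-style idea (sketch): instead of relying on an analytic inverse,
# fit a separate reverse model g(z) ~ forward^{-1}(z) from (z, x) pairs.
# Here a simple least-squares fit plays that role.
x_train = rng.normal(size=(1000, 1))
z_train = forward(x_train)

# Hypothetical reverse model: linear regression on features [z, 1].
A = np.hstack([z_train, np.ones_like(z_train)])
w, _, _, _ = np.linalg.lstsq(A, x_train, rcond=None)

def learned_reverse(z):
    """Learned approximation of the noise-to-data mapping."""
    return w[0] * z + w[1]

z = forward(np.array([2.0]))
print(float(exact_inverse(z)))    # recovers the original data point
print(float(learned_reverse(z)))  # approximate reverse, close to exact
```

Because the reverse model is decoupled from the forward map, it need not be causally structured, which is the mechanism behind the sampling speedup the abstract reports for BiFlow over causal decoding.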

Top-level tags: machine learning, model training, theory
Detailed tags: normalizing flows, generative modeling, inverse mapping, image generation, sampling efficiency

Bidirectional Normalizing Flow: From Data to Noise and Back


1️⃣ One-sentence summary

This paper proposes a new framework, Bidirectional Normalizing Flow (BiFlow), which drops the strict requirement of exact model invertibility: by learning an approximate noise-to-data reverse mapping with more flexible architectures and loss functions, it achieves higher generation quality on image generation tasks and sampling up to two orders of magnitude faster.


Source: arXiv 2512.10953