BiFM: Bidirectional Flow Matching for Few-Step Image Editing and Generation
1️⃣ One-sentence summary
This paper proposes BiFM, a bidirectional flow matching framework that jointly learns image generation and image inversion within a single model. By doing so, it addresses the quality degradation that existing AI image-editing methods suffer under fast (few-step) sampling, enabling higher-quality and more flexible few-step image editing and generation.
Recent diffusion and flow matching models have demonstrated strong capabilities in image generation and editing by progressively removing noise through iterative sampling. While this enables flexible inversion for semantic-preserving edits, few-step sampling regimes suffer from poor forward process approximation, leading to degraded editing quality. Existing few-step inversion methods often rely on pretrained generators and auxiliary modules, limiting scalability and generalization across different architectures. To address these limitations, we propose BiFM (Bidirectional Flow Matching), a unified framework that jointly learns generation and inversion within a single model. BiFM directly estimates average velocity fields in both ``image $\to$ noise'' and ``noise $\to$ image'' directions, constrained by a shared instantaneous velocity field derived from either predefined schedules or pretrained multi-step diffusion models. Additionally, BiFM introduces a novel training strategy using continuous time-interval supervision, stabilized by a bidirectional consistency objective and a lightweight time-interval embedding. This bidirectional formulation also enables one-step inversion and can integrate seamlessly into popular diffusion and flow matching backbones. Across diverse image editing and generation tasks, BiFM consistently outperforms existing few-step approaches, achieving superior performance and editability.
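To build intuition for the quantities the abstract names, here is a minimal toy sketch (not the paper's implementation) of a linear flow matching path, its instantaneous velocity, the average velocity over a time interval, and the bidirectional one-step round trip. In BiFM these average velocity fields are *learned* by a network in both directions; below they are computed in closed form for a scalar example, where the convention `t = 0` for image and `t = 1` for noise is an assumption for illustration:

```python
# Toy scalar example of the linear flow-matching path z_t = (1 - t) * x + t * eps,
# with t = 0 the image side and t = 1 the noise side (assumed convention).
x, eps = 2.0, -1.0             # stand-in "image" sample and noise sample

def z(t):
    """Point on the linear probability path at time t."""
    return (1.0 - t) * x + t * eps

v = eps - x                    # instantaneous velocity dz_t/dt (constant on a straight path)

def avg_velocity(r, t):
    # Average velocity over [r, t]: displacement divided by interval length.
    return (z(t) - z(r)) / (t - r)

# On the straight path the average velocity equals the instantaneous one.
assert abs(avg_velocity(0.2, 0.7) - v) < 1e-12

# One-step inversion (image -> noise) followed by one-step generation
# (noise -> image) using the ground-truth average velocity:
z1 = z(0.0) + 1.0 * v          # jump image -> noise in a single step
x_rec = z1 - 1.0 * v           # jump noise -> image in a single step
assert abs(x_rec - x) < 1e-12  # the bidirectional round trip recovers the image
```

With learned, imperfect velocity estimates this round trip would not be exact; the bidirectional consistency objective described in the abstract can be read as penalizing precisely this reconstruction gap.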
Source: arXiv: 2603.24942