Unlearning for One-Step Generative Models via Unbalanced Optimal Transport
1️⃣ One-Sentence Summary
This paper proposes a new method called UOT-Unlearn, which uses unbalanced optimal transport to let efficient one-step generative models safely "forget" specific classes (e.g., sensitive content) while preserving overall image generation quality, addressing the incompatibility of existing unlearning methods with such fast models.
Recent advances in one-step generative frameworks, such as flow map models, have significantly improved the efficiency of image generation by learning direct noise-to-data mappings in a single forward pass. However, machine unlearning for ensuring the safety of these powerful generators remains entirely unexplored. Existing diffusion unlearning methods are inherently incompatible with these one-step models, as they rely on a multi-step iterative denoising process. In this work, we propose UOT-Unlearn, a novel plug-and-play class unlearning framework for one-step generative models based on Unbalanced Optimal Transport (UOT). Our method formulates unlearning as a principled trade-off between a forget cost, which suppresses the target class, and an $f$-divergence penalty, which preserves overall generation fidelity via relaxed marginal constraints. By leveraging UOT, our method enables the probability mass of the forgotten class to be smoothly redistributed to the remaining classes, rather than collapsing into low-quality or noise-like samples. Experimental results on CIFAR-10 and ImageNet-256 demonstrate that our framework achieves superior unlearning success (PUL) and retention quality (u-FID), significantly outperforming baselines.
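The trade-off described above can be illustrated with a toy surrogate objective. Note this is a minimal sketch based only on the abstract, not the paper's actual formulation: the function names, the use of KL as the $f$-divergence, and the reduction of the marginal constraint to class frequencies are all assumptions for illustration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions (one choice of f-divergence)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def uot_unlearn_loss(class_probs, forget_class, ref_probs, lam=1.0):
    """Toy surrogate of the UOT-Unlearn trade-off (hypothetical, not the paper's loss):
    a forget cost (average mass the generator places on the forget class) plus a
    KL penalty keeping the retained-class marginal close to a reference
    distribution, standing in for the relaxed marginal constraints of UOT."""
    class_probs = np.asarray(class_probs, dtype=float)
    # Forget cost: suppress probability mass on the target class.
    forget_cost = float(class_probs[:, forget_class].mean())
    # Class marginal of the generator's outputs.
    marginal = class_probs.mean(axis=0)
    # Renormalize over the retained classes and compare to the reference,
    # so forgotten mass is encouraged to redistribute rather than collapse.
    keep = np.delete(marginal, forget_class)
    keep = keep / keep.sum()
    penalty = kl_divergence(keep, np.asarray(ref_probs, dtype=float))
    return forget_cost + lam * penalty
```

Under this sketch, a generator that still emits the forget class pays the forget cost, while one that redistributes that mass unevenly across the retained classes pays the divergence penalty, mirroring the balance the abstract describes.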
Source: arXiv: 2603.16489