A Concept is More Than a Word: Diversified Unlearning in Text-to-Image Diffusion Models
1️⃣ One-Sentence Summary
This paper proposes a new method called "Diversified Unlearning", which represents a concept more precisely through a set of diverse text prompts, enabling text-to-image generation models to "forget" harmful or unwanted concepts more effectively and with fewer side effects. It addresses the limitation of traditional keyword-only unlearning, which tends to mistakenly erase related content.
Concept unlearning has emerged as a promising direction for reducing the risks of harmful content generation in text-to-image diffusion models by selectively erasing undesirable concepts from a model's parameters. Existing approaches typically rely on keywords to identify the target concept to be unlearned. However, we show that this keyword-based formulation is inherently limited: a visual concept is multi-dimensional, can be expressed in diverse textual forms, and often overlaps with related concepts in the latent space, making keyword-only unlearning, which only imprecisely indicates the target concept, brittle and prone to over-forgetting. This occurs because a single keyword represents only a narrow point estimate of the concept, failing to cover its full semantic distribution and entangled variations in the latent space. To address this limitation, we propose Diversified Unlearning, a distributional framework that represents a concept through a set of contextually diverse prompts rather than a single keyword. This richer representation enables more precise and robust unlearning. Through extensive experiments across multiple benchmarks and state-of-the-art baselines, we demonstrate that integrating Diversified Unlearning as an add-on component into existing unlearning pipelines consistently achieves stronger erasure, better retention of unrelated concepts, and improved robustness against adversarial recovery attacks.
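To make the idea concrete, below is a minimal, hypothetical sketch of how an existing keyword-based erasure objective (an ESD-style negative-guidance loss is used here as a stand-in for "existing unlearning pipelines") could be averaged over a set of contextually diverse prompts for one concept, as Diversified Unlearning proposes. The toy text encoder, toy noise predictor, prompt strings, and the `eta` parameter are all illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
# Hypothetical sketch: extending a keyword-based, ESD-style erasure objective
# to a *set* of contextually diverse prompts for one concept. All module and
# prompt names are placeholders so the example runs without a real diffusion model.
import torch
import torch.nn as nn


class ToyTextEncoder(nn.Module):
    """Stand-in for a CLIP text encoder: maps a prompt string to an embedding."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.dim = dim

    def forward(self, prompt: str) -> torch.Tensor:
        # Deterministic hash-seeded embedding so the sketch runs without CLIP.
        g = torch.Generator().manual_seed(abs(hash(prompt)) % (2 ** 31))
        return torch.randn(1, self.dim, generator=g)


class ToyNoisePredictor(nn.Module):
    """Stand-in for the diffusion U-Net epsilon-predictor, conditioned on text."""
    def __init__(self, latent_dim: int = 16, text_dim: int = 64):
        super().__init__()
        self.net = nn.Linear(latent_dim + text_dim, latent_dim)

    def forward(self, z_t: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z_t, cond.expand(z_t.size(0), -1)], dim=-1))


def diversified_erasure_loss(student, frozen_teacher, text_enc, z_t, prompts, eta=1.0):
    """ESD-style erasure averaged over a set of diverse prompts for one concept:
    steer the student's conditional prediction toward the frozen teacher's
    unconditional prediction, negatively guided away from the concept."""
    uncond = text_enc("")  # empty prompt as the unconditional embedding
    with torch.no_grad():
        eps_uncond = frozen_teacher(z_t, uncond)
    loss = 0.0
    for p in prompts:
        c = text_enc(p)
        with torch.no_grad():
            eps_cond_teacher = frozen_teacher(z_t, c)
            # Target pushes predictions away from the concept direction.
            target = eps_uncond - eta * (eps_cond_teacher - eps_uncond)
        loss = loss + torch.mean((student(z_t, c) - target) ** 2)
    return loss / len(prompts)


# Illustrative usage: several prompts expressing the same concept in different
# contexts and phrasings, rather than the single keyword "gun".
concept_prompts = [
    "a photo of a gun",
    "an oil painting of a revolver on a table",
    "cartoon character holding a firearm",
]
text_enc = ToyTextEncoder()
teacher = ToyNoisePredictor()
student = ToyNoisePredictor()
student.load_state_dict(teacher.state_dict())
z_t = torch.randn(4, 16)  # a batch of noised latents at some timestep
loss = diversified_erasure_loss(student, teacher, text_enc, z_t, concept_prompts)
loss.backward()
```

The only change relative to a keyword-only pipeline is the average over `concept_prompts`: each prompt contributes its own erasure target, so the update covers more of the concept's semantic distribution instead of a single point estimate.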
Source: arXiv: 2603.18767