arXiv submission date: 2025-12-28
📄 Abstract - Reverse Personalization

Recent text-to-image diffusion models have demonstrated remarkable generation of realistic facial images conditioned on textual prompts and human identities, enabling the creation of personalized facial imagery. However, existing prompt-based methods for removing or modifying identity-specific features rely either on the subject being well-represented in the pre-trained model or on fine-tuning the model for specific identities. In this work, we analyze the identity generation process and introduce a reverse personalization framework for face anonymization. Our approach leverages conditional diffusion inversion, allowing direct manipulation of images without using text prompts. To generalize beyond subjects in the model's training data, we incorporate an identity-guided conditioning branch. Unlike prior anonymization methods, which lack control over facial attributes, our framework supports attribute-controllable anonymization. We demonstrate that our method achieves a state-of-the-art balance between identity removal, attribute preservation, and image quality. Source code and data are available at this https URL .
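The core mechanism the abstract describes, inverting an image to noise with a conditional diffusion model and then regenerating it under a different identity condition, can be sketched with a toy deterministic DDIM inversion loop. This is a minimal illustration only: `toy_eps` is a hypothetical linear stand-in for the paper's identity-conditioned denoising network, and all names and the noise schedule are assumptions, not the authors' implementation.

```python
import numpy as np

def make_schedule(T=50):
    # Standard linear beta schedule; alphas_bar decreases from ~1 toward 0.
    betas = np.linspace(1e-4, 0.02, T)
    return np.cumprod(1.0 - betas)

def toy_eps(x, id_emb):
    # Hypothetical noise predictor: a fixed linear map of the state plus an
    # identity embedding. The real model is a conditioned denoising network
    # with an identity-guided conditioning branch.
    return 0.1 * x + 0.05 * id_emb

def ddim_invert(x0, alphas_bar, id_emb):
    # Deterministic DDIM inversion: map an image to its latent noise,
    # conditioned on the source identity embedding (no text prompt).
    x = x0.copy()
    ab_prev = 1.0
    for ab in alphas_bar:  # alphas_bar decreasing: clean -> noisy
        eps = toy_eps(x, id_emb)
        x0_pred = (x - np.sqrt(1 - ab_prev) * eps) / np.sqrt(ab_prev)
        x = np.sqrt(ab) * x0_pred + np.sqrt(1 - ab) * eps
        ab_prev = ab
    return x

def ddim_sample(xT, alphas_bar, id_emb):
    # Reverse of inversion. Passing a *different* identity embedding here is
    # the "reverse personalization" idea: regenerate the image so the identity
    # changes while the inverted latent keeps the other attributes.
    x = xT.copy()
    abs_rev = alphas_bar[::-1]  # increasing: noisy -> clean
    for i, ab in enumerate(abs_rev):
        ab_next = abs_rev[i + 1] if i + 1 < len(abs_rev) else 1.0
        eps = toy_eps(x, id_emb)
        x0_pred = (x - np.sqrt(1 - ab) * eps) / np.sqrt(ab)
        x = np.sqrt(ab_next) * x0_pred + np.sqrt(1 - ab_next) * eps
    return x
```

With the same embedding, invert-then-sample approximately reconstructs the input (up to the usual DDIM inversion error); swapping in a different embedding yields a changed output, which is the anonymization step in miniature.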

Top-level tags: computer vision, model training, aigc
Detailed tags: face anonymization, diffusion models, image editing, privacy, inversion

Reverse Personalization


1️⃣ One-sentence summary

This paper proposes a new method that anonymizes faces directly in images, without relying on text prompts or on training the model for a specific identity, while flexibly controlling which other facial attributes are preserved or modified, striking a better balance between privacy protection and image quality.

Source: arXiv:2512.22984