arXiv submission date: 2026-04-26
📄 Abstract - Do Protective Perturbations Really Protect Portrait Privacy under Real-world Image Transformations?

Proactive defense methods protect portrait images from unauthorized editing or talking face generation (TFG) by introducing pixel-level protective perturbations, and have already attracted increasing attention for privacy protection. In real-world scenarios, images inevitably undergo various transformations during cross-device display and dissemination--such as scale transformations and color compression--that directly alter pixel values. However, it remains unclear whether such pixel-level modifications affect the effectiveness of existing proactive defense methods that rely on pixel-level perturbations. To solve this problem, we conduct a systematic evaluation of representative proactive defenses under image transformation. The evaluated methods are selected to span different generation architectures such as diffusion and GAN-based models, as well as defense scopes covering both portrait and natural images, and are assessed using both qualitative and quantitative metrics for subjective and objective comparison. Experimental results indicate that defense methods based on pixel-level perturbations struggle to withstand common image transformations, posing a risk of defense failure in real-world applications. To further highlight this risk, we propose a simple yet effective purification framework by leveraging the vulnerabilities induced by real-world image transformations. Experimental results demonstrate that the proposed method can efficiently remove protective perturbations with low computational cost, highlighting previously overlooked risks to the research community.
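The abstract attributes the defense failure to ordinary pixel-altering transformations such as rescaling and color/JPEG compression. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's actual evaluation pipeline or purification framework: it round-trips a protected portrait through a downscale/upscale and a JPEG re-encode, then compares the residual perturbation energy before and after. The file names, 0.5x scale factor, and JPEG quality are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): apply common real-world transforms --
# downscale/upscale and JPEG re-encoding -- to a "protected" portrait and measure
# how much of the pixel-level protective perturbation survives.
import io

import numpy as np
from PIL import Image


def transform_roundtrip(img: Image.Image, scale: float = 0.5, jpeg_quality: int = 75) -> Image.Image:
    """Downscale, upscale back, then JPEG-compress; every step alters pixel values."""
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.BICUBIC)
    restored = small.resize((w, h), Image.BICUBIC)
    buf = io.BytesIO()
    restored.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def perturbation_energy(a: Image.Image, b: Image.Image) -> float:
    """L2 norm of the pixel-wise difference between two images."""
    return float(np.linalg.norm(
        np.asarray(a, dtype=np.float32) - np.asarray(b, dtype=np.float32)))


if __name__ == "__main__":
    # Hypothetical file names: a clean portrait and its perturbation-protected copy.
    clean = Image.open("portrait_clean.png").convert("RGB")
    protected = Image.open("portrait_protected.png").convert("RGB")

    # Perturbation energy before any transformation.
    before = perturbation_energy(clean, protected)
    # Residual perturbation after both images undergo the same real-world transforms.
    after = perturbation_energy(transform_roundtrip(clean), transform_roundtrip(protected))

    print(f"perturbation energy before transforms: {before:.1f}")
    print(f"residual perturbation after transforms: {after:.1f}")
```

A large drop from the first value to the second would indicate that the protective perturbation is largely destroyed by the transformations, which is the vulnerability the paper's purification framework exploits.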

Top tags: computer vision, model evaluation
Detailed tags: privacy protection, adversarial perturbations, image transformations, defense evaluation, purification framework

Do Protective Perturbations Really Protect Portrait Privacy under Real-world Image Transformations?


1️⃣ One-sentence summary

This paper finds that existing protection methods, which add subtle pixel-level perturbations to portrait images to prevent tampering or face forgery, are easily broken in practice by common image-processing operations such as on-device rescaling and color compression, causing the defenses to fail. The authors further propose a low-cost method that readily removes these perturbations, exposing a major risk in deploying such privacy-protection techniques in the real world.

Source: arXiv 2604.23688