Evaluating Demographic Misrepresentation in Image-to-Image Portrait Editing
1️⃣ One-sentence summary
This paper finds that popular image-editing AI tools, when processing portraits of people of different races, genders, and ages, systematically exhibit bias: edits are weakened or stereotypes are imposed. It also proposes a simple fix that substantially reduces bias against minority groups without modifying the model.
Demographic bias in text-to-image (T2I) generation is well studied, yet demographic-conditioned failures in instruction-guided image-to-image (I2I) editing remain underexplored. We examine whether identical edit instructions yield systematically different outcomes across subject demographics in open-weight I2I editors. We formalize two failure modes: Soft Erasure, where edits are silently weakened or ignored in the output image, and Stereotype Replacement, where edits introduce unrequested, stereotype-consistent attributes. We introduce a controlled benchmark that probes demographic-conditioned behavior by generating and editing portraits conditioned on race, gender, and age using a diagnostic prompt set, and evaluate multiple editors with vision-language model (VLM) scoring and human evaluation. Our analysis shows that identity preservation failures are pervasive, demographically uneven, and shaped by implicit social priors, including occupation-driven gender inference. Finally, we demonstrate that a prompt-level identity constraint, without model updates, can substantially reduce demographic change for minority groups while leaving majority-group portraits largely unchanged, revealing asymmetric identity priors in current editors. Together, our findings establish identity preservation as a central and demographically uneven failure mode in I2I editing and motivate demographic-robust editing systems. Project page: this https URL
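The abstract mentions a prompt-level identity constraint applied without any model updates. The paper does not give the exact wording, so the sketch below is purely illustrative: it shows one plausible way such a constraint could be appended to an edit instruction before it is sent to an I2I editor (the function name and constraint text are hypothetical, not the authors' actual prompt).

```python
# Hypothetical sketch of a prompt-level identity constraint for
# instruction-guided I2I editing. The wrapper text is an assumption,
# not the prompt used in the paper.
def with_identity_constraint(instruction: str) -> str:
    """Append a clause asking the editor to keep the subject's
    perceived race, gender, and age unchanged."""
    constraint = (
        "Keep the person's perceived race, gender, and age exactly "
        "as in the input image, and do not alter their identity."
    )
    return f"{instruction.rstrip('. ')}. {constraint}"

print(with_identity_constraint("Make the person look like a doctor"))
```

Because the constraint operates purely at the prompt level, it can be applied to any instruction-guided editor without retraining or weight access, which matches the abstract's claim of reducing demographic change "without model updates".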
Source: arXiv:2602.16149