arXiv submission date: 2026-04-14
📄 Abstract - Deepfakes at Face Value: Image and Authority

Deepfakes are synthetic media that superimpose or generate someone's likeness onto pre-existing sound, images, or videos using deep learning methods. Existing accounts of the wrongs involved in creating and distributing deepfakes focus on the harms they cause or the non-normative interests they violate. However, these approaches do not explain how deepfakes can be wrongful even when they cause no harm and set back no other non-normative interest. To address this issue, this paper identifies a neglected reason why deepfakes are wrong: they can subvert our legitimate interests in having authority over the permissible uses of our image and the governance of our identity. We argue that deepfakes are wrong when they usurp our authority to determine the provenance of our own agency by exploiting our biometric features as a generative resource. In particular, we have a specific right against the algorithmic conscription of our identity. We refine the scope of this interest by distinguishing permissible forms of appropriation, such as artistic depiction, from wrongful algorithmic simulation.

Top-level tags: multi-modal theory aigc
Detailed tags: deepfakes ethics identity governance biometric rights algorithmic conscription

Deepfakes at Face Value: Image and Authority


1️⃣ One-Sentence Summary

This paper argues that the core wrong of deepfakes lies in their violation of our authority over the use of our own image and the governance of our identity: by exploiting a person's biometric features as a generative resource without permission, they strip us of the right to determine the provenance of our own agency, rather than being wrong merely because they cause actual harm.

Source: arXiv: 2604.12490