
arXiv submission date: 2025-12-17
📄 Abstract - Robust and Calibrated Detection of Authentic Multimedia Content

Generative models can synthesize highly realistic content, so-called deepfakes, that are already being misused at scale to undermine digital media authenticity. Current deepfake detection methods are unreliable for two reasons: (i) distinguishing inauthentic content post-hoc is often impossible (e.g., with memorized samples), leading to an unbounded false positive rate (FPR); and (ii) detection lacks robustness, as adversaries can adapt to known detectors with near-perfect accuracy using minimal computational resources. To address these limitations, we propose a resynthesis framework to determine if a sample is authentic or if its authenticity can be plausibly denied. We make two key contributions focusing on the high-precision, low-recall setting against efficient (i.e., compute-restricted) adversaries. First, we demonstrate that our calibrated resynthesis method is the most reliable approach for verifying authentic samples while maintaining controllable, low FPRs. Second, we show that our method achieves adversarial robustness against efficient adversaries, whereas prior methods are easily evaded under identical compute budgets. Our approach supports multiple modalities and leverages state-of-the-art inversion techniques.

Top-level tags: multi-modal, model evaluation, computer vision
Detailed tags: deepfake detection, adversarial robustness, content authentication, false positive rate, generative models

Robust and Calibrated Detection of Authentic Multimedia Content


1️⃣ One-sentence summary

This paper proposes a new method based on a resynthesis framework for reliably detecting authentic multimedia content under adversarial attack. By using calibration to control the false positive rate, it addresses two problems of existing deepfake detection techniques: they are easily evaded, and their false positive rates are unbounded.
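The calibration idea can be illustrated with a minimal sketch. The intuition behind resynthesis-based verification is that a sample a generative model can resynthesize almost exactly could plausibly have been generated, so its authenticity is deniable, while authentic samples tend to resist close resynthesis. A threshold on the resynthesis error can then be calibrated on held-out authentic samples so that the false positive rate (authentic samples wrongly flagged as deniable) stays below a chosen target. The error distributions below are illustrative stand-ins, not the paper's data or its actual inversion procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical resynthesis errors (illustrative only): authentic samples
# are hard for the generator to reproduce, so their error is high;
# generated samples invert almost exactly, so their error is low.
authentic_errors = rng.normal(loc=1.0, scale=0.2, size=10_000)
generated_errors = rng.normal(loc=0.2, scale=0.1, size=10_000)

def calibrate_threshold(calib_errors, target_fpr):
    """Pick a threshold so that at most `target_fpr` of the authentic
    calibration samples fall below it (i.e. are wrongly flagged)."""
    return np.quantile(calib_errors, target_fpr)

# Calibrate on one half of the authentic samples, evaluate on the other.
tau = calibrate_threshold(authentic_errors[:5_000], target_fpr=0.01)

# Decision rule: error >= tau -> verified authentic (high precision);
# error < tau -> authenticity is plausibly deniable.
fpr = np.mean(authentic_errors[5_000:] < tau)
fakes_flagged = np.mean(generated_errors < tau)
print(f"empirical FPR on held-out authentic samples: {fpr:.3f}")
print(f"fraction of generated samples flagged:       {fakes_flagged:.3f}")
```

Because the threshold is a quantile of the authentic-error distribution, the false positive rate is controlled by construction (up to sampling error), matching the high-precision, low-recall setting the abstract describes.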


Source: arXiv:2512.15182