Your AI-Generated Image Detector Can Secretly Achieve SOTA Accuracy, If Calibrated
1️⃣ One-Sentence Summary
This paper finds that existing AI-generated image detectors are prone to errors when confronted with new generation methods, and proposes a method that automatically calibrates the decision boundary without retraining, using only a small amount of validation data, thereby significantly improving detector accuracy and robustness in practical deployment.
Despite being trained on balanced datasets, existing AI-generated image detectors often exhibit systematic bias at test time, frequently misclassifying fake images as real. We hypothesize that this behavior stems from distributional shift in fake samples and implicit priors learned during training. Specifically, models tend to overfit to superficial artifacts that do not generalize well across different generation methods, leading to a misaligned decision threshold when faced with test-time distribution shift. To address this, we propose a theoretically grounded post-hoc calibration framework based on Bayesian decision theory. In particular, we introduce a learnable scalar correction to the model's logits, optimized on a small validation set from the target distribution while keeping the backbone frozen. This parametric adjustment compensates for distributional shift in model output, realigning the decision boundary even without requiring ground-truth labels. Experiments on challenging benchmarks show that our approach significantly improves robustness without retraining, offering a lightweight and principled solution for reliable and adaptive AI-generated image detection in the open world. Code is available at this https URL.
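To make the calibration idea concrete, here is a minimal PyTorch sketch of the kind of post-hoc adjustment the abstract describes: a frozen binary detector whose output logit is shifted by a single learnable scalar fit on a small held-out set from the target distribution. The backbone, loss choice, and data here are placeholders assumed for illustration; the paper's actual Bayesian-decision-theoretic objective (including its label-free variant) may differ from the simple labeled BCE fit shown.

```python
import torch
import torch.nn as nn

class CalibratedDetector(nn.Module):
    """Wrap a frozen binary detector and add a single learnable scalar
    correction to its logits (post-hoc calibration; backbone stays frozen)."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)          # keep the pretrained detector frozen
        self.delta = nn.Parameter(torch.zeros(1))  # learnable logit shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            logits = self.backbone(x).squeeze(-1)  # raw detector logits (fake > 0)
        return logits + self.delta                 # shifted logits realign the threshold


# --- toy usage with a stand-in backbone and dummy data (illustration only) ---
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
model = CalibratedDetector(backbone)

# Small validation set drawn from the target distribution (random tensors here).
x_val = torch.randn(64, 3, 32, 32)
y_val = torch.randint(0, 2, (64,)).float()  # 1 = fake, 0 = real

opt = torch.optim.Adam([model.delta], lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x_val), y_val)
    loss.backward()                          # gradient flows only into delta
    opt.step()

print("learned logit correction:", model.delta.item())
```

Because only the scalar `delta` is optimized, the fit requires very little data and leaves the detector's learned features untouched, which is what makes this a lightweight, retraining-free adjustment.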
Source: arXiv:2602.01973