📄 Abstract - DiffSeg30k: A Multi-Turn Diffusion Editing Benchmark for Localized AIGC Detection

Diffusion-based editing enables realistic modification of local image regions, making AI-generated content harder to detect. Existing AIGC detection benchmarks focus on classifying entire images, overlooking the localization of diffusion-based edits. We introduce DiffSeg30k, a publicly available dataset of 30k diffusion-edited images with pixel-level annotations, designed to support fine-grained detection. DiffSeg30k features: 1) In-the-wild images--we collect images or image prompts from COCO to reflect real-world content diversity; 2) Diverse diffusion models--local edits using eight SOTA diffusion models; 3) Multi-turn editing--each image undergoes up to three sequential edits to mimic real-world sequential editing; and 4) Realistic editing scenarios--a vision-language model (VLM)-based pipeline automatically identifies meaningful regions and generates context-aware prompts covering additions, removals, and attribute changes. DiffSeg30k shifts AIGC detection from binary classification to semantic segmentation, enabling simultaneous localization of edits and identification of the editing models. We benchmark three baseline segmentation approaches, revealing significant challenges in semantic segmentation tasks, particularly concerning robustness to image distortions. Experiments also reveal that segmentation models, despite being trained for pixel-level localization, emerge as highly reliable whole-image classifiers of diffusion edits, outperforming established forgery classifiers while showing great potential in cross-generator generalization. We believe DiffSeg30k will advance research in fine-grained localization of AI-generated content by demonstrating the promise and limitations of segmentation-based methods. DiffSeg30k is released at: this https URL
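The abstract frames localized AIGC detection as semantic segmentation: each pixel is labeled either as unedited or with the diffusion model that produced the edit, and the paper notes that such per-pixel predictions also yield strong whole-image classifiers. Below is a minimal sketch (not the authors' code) of how a predicted segmentation map could be aggregated into an image-level verdict; the class indices, threshold, and mask format are illustrative assumptions only.

```python
# Sketch: aggregate a per-pixel class map into an image-level "edited?" decision.
# Assumptions (not from the paper): class 0 = real/unedited, classes 1..8 = the
# eight diffusion editors; an image counts as edited if enough pixels are flagged.
import numpy as np

REAL = 0          # assumed class id for unedited pixels
NUM_EDITORS = 8   # the paper uses eight diffusion models

def image_level_decision(pred_mask: np.ndarray, min_edited_fraction: float = 0.01):
    """Reduce a (H, W) map of predicted class ids to an image-level verdict:
    whether the image is edited and, if so, which editor (majority vote)."""
    edited = pred_mask != REAL
    frac = float(edited.mean())
    if frac < min_edited_fraction:  # too few edited pixels -> treat image as real
        return {"edited": False, "editor": None, "edited_fraction": frac}
    # Majority vote over edited pixels to name the most likely editing model.
    counts = np.bincount(pred_mask[edited], minlength=NUM_EDITORS + 1)
    editor = int(counts[1:].argmax()) + 1
    return {"edited": True, "editor": editor, "edited_fraction": frac}

# Toy example: a 64x64 image where a 16x16 patch was "edited" by model 3.
mask = np.zeros((64, 64), dtype=np.int64)
mask[10:26, 20:36] = 3
print(image_level_decision(mask))
# -> {'edited': True, 'editor': 3, 'edited_fraction': 0.0625}
```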

Top-level tags: computer vision, AIGC, benchmark
Detailed tags: image segmentation, diffusion models, localized editing, AI-generated content detection, multi-turn editing

📄 Paper Summary

DiffSeg30k: A Multi-Turn Diffusion Editing Benchmark for Localized AIGC Detection


1️⃣ One-Sentence Summary

This paper introduces DiffSeg30k, a dataset of 30k diffusion-edited images that moves AI-generated content detection from whole-image classification to pixel-level localization, enabling more precise identification and localization of AI-modified image regions.

