arXiv submission date: 2026-02-05
📄 Abstract - LocateEdit-Bench: A Benchmark for Instruction-Based Editing Localization

Recent advancements in image editing have enabled highly controllable and semantically aware alteration of visual content, posing unprecedented challenges to manipulation localization. However, existing AI-generated forgery localization methods focus primarily on inpainting-based manipulations, making them ineffective against the latest instruction-based editing paradigms. To bridge this critical gap, we propose LocateEdit-Bench, a large-scale dataset comprising $231$K edited images, designed specifically to benchmark localization methods against instruction-driven image editing. Our dataset incorporates four cutting-edge editing models and covers three common edit types. We conduct a detailed analysis of the dataset and develop two multi-metric evaluation protocols to assess existing localization methods. Our work establishes a foundation for keeping pace with the evolving landscape of image editing, thereby facilitating the development of effective methods for future forgery localization. The dataset will be open-sourced upon acceptance.
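The abstract mentions multi-metric evaluation protocols for localization but does not spell them out here. As an illustrative sketch only (the paper's actual metrics may differ), manipulation localization is commonly scored by comparing a predicted binary mask against a ground-truth edit mask with pixel-level IoU and F1:

```python
import numpy as np

def localization_scores(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-level localization metrics between two binary masks.

    pred, gt: arrays of the same shape; nonzero/True marks an edited pixel.
    Returns IoU and F1 over the positive (edited) class.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # correctly localized edited pixels
    fp = np.logical_and(pred, ~gt).sum()    # false alarms
    fn = np.logical_and(~pred, gt).sum()    # missed edited pixels
    union = tp + fp + fn
    iou = tp / union if union else 1.0      # both masks empty -> perfect
    denom = 2 * tp + fp + fn
    f1 = 2 * tp / denom if denom else 1.0
    return {"iou": float(iou), "f1": float(f1)}

# Example: one extra pixel predicted beyond the true edit region
scores = localization_scores(np.array([[1, 1], [0, 0]]),
                             np.array([[1, 0], [0, 0]]))
# tp=1, fp=1, fn=0 -> IoU = 0.5, F1 = 2/3
```

The function names and the exact metric choice are assumptions for illustration; benchmarks of this kind often also report AUC or instance-level scores.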

Top-level tags: computer vision, benchmark, data
Detailed tags: image editing localization, forgery detection, instruction-based editing, dataset, evaluation protocol

LocateEdit-Bench: A Benchmark for Instruction-Based Editing Localization


1️⃣ One-Sentence Summary

This paper introduces a new benchmark dataset designed specifically for evaluating image manipulation localization methods. The dataset contains a large number of tampered images generated by state-of-the-art instruction-based editing models, addressing the problem that existing methods cannot effectively detect manipulations produced by the latest instruction-based image editing paradigms.

Source: arXiv: 2602.05577