arXiv submission date: 2026-03-16
📄 Abstract - HalDec-Bench: Benchmarking Hallucination Detector in Image Captioning

Hallucination detection in captions (HalDec) assesses a vision-language model's ability to correctly align image content with text by identifying errors in captions that misrepresent the image. Beyond evaluation, effective hallucination detection is also essential for curating high-quality image-caption pairs used to train VLMs. However, the generalizability of VLMs as hallucination detectors across different captioning models and hallucination types remains unclear due to the lack of a comprehensive benchmark. In this work, we introduce HalDec-Bench, a benchmark designed to evaluate hallucination detectors in a principled and interpretable manner. HalDec-Bench contains captions generated by diverse VLMs together with human annotations indicating the presence of hallucinations, detailed hallucination-type categories, and segment-level labels. The benchmark provides tasks with a wide range of difficulty levels and reveals performance differences across models that are not visible in existing multimodal reasoning or alignment benchmarks. Our analysis further uncovers two key findings. First, detectors tend to recognize sentences appearing at the beginning of a response as correct, regardless of their actual correctness. Second, our experiments suggest that dataset noise can be substantially reduced by using strong VLMs as filters while employing recent VLMs as caption generators. Our project page is available at this https URL.

Top-level tags: model evaluation, benchmark, multi-modal
Detailed tags: hallucination detection, vision-language models, image captioning, dataset curation, evaluation benchmark

HalDec-Bench: Benchmarking Hallucination Detector in Image Captioning


1️⃣ One-sentence summary

This paper introduces HalDec-Bench, a new benchmark for systematically evaluating how well vision-language models detect "hallucinations" in image captions (i.e., errors where the caption misrepresents the image content). It finds that existing detectors tend to blindly trust sentences at the beginning of a caption regardless of their correctness, and shows that strong vision-language models can serve as filters to substantially improve the quality of training data.

Source: arXiv:2603.15253