arXiv submission date: 2026-04-30
📄 Abstract - Auditing Frontier Vision-Language Models for Trustworthy Medical VQA: Grounding Failures, Format Collapse, and Domain Adaptation

Deploying vision-language models (VLMs) in clinical settings demands auditable behavior under realistic failure conditions, yet the failure landscape of frontier VLMs on specialized medical inputs is poorly characterized. We audit five recent frontier and grounding-aware VLMs (Gemini 2.5 Pro, GPT-5, o3, GLM-4.5V, Qwen 2.5 VL) on Medical VQA along two trust-relevant axes. Perception: all models localize anatomical and pathological targets poorly -- the best model reaches only 0.23 mean IoU and 19.1% Acc@0.5 -- and exhibit clinically dangerous laterality confusion. Pipeline integration: a self-grounding pipeline, where the same model localizes then answers, degrades VQA accuracy for every model -- driven by both inaccurate localization and format-compliance failures under the two-step prompt (parse failure rises to 70%--99% for Gemini and GPT-5 on VQA-RAD). Replacing predicted boxes with ground-truth annotations recovers and improves VQA accuracy, consistent with the failure residing in the perception module rather than in the decomposition itself. These observational findings identify grounding quality as a primary trustworthiness bottleneck in our SLAKE bounding-box setting. As a complementary fine-tuning follow-up, supervised fine-tuning of Qwen 2.5 VL on combined Med-VQA training data attains the highest reported SLAKE open-ended recall (85.5%) among comparable methods, suggesting that the VQA-level gap is tractable with domain adaptation; whether this also closes the perception/trustworthiness bottleneck is left to future work.
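The abstract reports localization quality as mean IoU and Acc@0.5 (the fraction of predictions whose IoU with the ground-truth box is at least 0.5). As a minimal sketch of how these two metrics are typically computed over corner-format boxes -- the function names and box convention here are illustrative, not taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp at zero so non-overlapping boxes yield zero intersection.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_metrics(pred_boxes, gt_boxes, thresh=0.5):
    """Mean IoU and Acc@thresh over paired predicted / ground-truth boxes."""
    ious = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    mean_iou = sum(ious) / len(ious)
    acc_at_thresh = sum(v >= thresh for v in ious) / len(ious)
    return mean_iou, acc_at_thresh
```

A prediction with no overlap scores 0, so a model that localizes the wrong side of the body (the laterality confusion the paper flags) is penalized in both numbers.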

Top-level tags: medical multi-modal model evaluation
Detailed tags: vision-language models, medical vqa, grounding failures, domain adaptation, trustworthiness

Auditing Frontier Vision-Language Models for Trustworthy Medical VQA: Grounding Failures, Format Collapse, and Domain Adaptation


1️⃣ One-Sentence Summary

This study systematically tests several frontier AI models on medical visual question answering and finds that they localize anatomical structures and lesions poorly (below 20% accuracy at the IoU ≥ 0.5 threshold). When a model must first localize a region and then answer, answer quality actually drops, driven by format errors and inaccurate localization; targeted fine-tuning, however, substantially improves performance.

Source: arXiv:2604.27720