arXiv submission date: 2026-04-15
📄 Abstract - Med-CAM: Minimal Evidence for Explaining Medical Decision Making

Reliable and interpretable decision-making is essential in medical imaging, where diagnostic outcomes directly influence patient care. Despite advances in deep learning, most medical AI systems operate as opaque black boxes, providing little insight into why a particular diagnosis was reached. In this paper, we introduce Med-CAM, a framework for generating minimal, sharp maps as evidence-based explanations for medical decision-making via Classifier Activation Matching. Med-CAM trains a segmentation network from scratch to produce a mask that highlights the minimal evidence critical to the model's decision for any seen or unseen image. This ensures that the explanation is both faithful to the network's behaviour and interpretable to clinicians. Experiments show that, unlike prior spatial explanation methods such as Grad-CAM and attention maps, which yield only fuzzy regions of relative importance, Med-CAM, with its superior spatial awareness of shapes, textures, and boundaries, delivers conclusive, evidence-based explanations that faithfully replicate the model's prediction for any given image. By explicitly constraining explanations to be compact, consistent with model activations, and diagnostically aligned, Med-CAM advances transparent AI to foster clinician understanding and trust in high-stakes medical applications such as pathology and radiology.
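The abstract does not spell out the training objective, but one plausible reading of "classifier activation matching" with a compactness constraint can be sketched as a loss in which the classifier's prediction on the masked image must match its prediction on the full image, while an L1 penalty keeps the mask small. The linear "classifier", the toy image, and the exact loss form below are all hypothetical stand-ins, not the paper's method:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical stand-ins: a fixed linear "classifier" over 16 "pixels" and a
# candidate explanation mask. In Med-CAM the mask would come from a trained
# segmentation network; here it is just an array we evaluate.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))   # 2-class linear classifier weights
image = rng.normal(size=16)    # toy input

def explanation_loss(mask, lam=0.1):
    """Assumed loss form: cross-entropy between the prediction on the full
    image and the prediction on the masked image (activation matching),
    plus an L1 penalty that keeps the mask compact."""
    p_full = softmax(W @ image)
    p_masked = softmax(W @ (image * mask))
    matching = -np.sum(p_full * np.log(p_masked + 1e-12))
    sparsity = lam * np.abs(mask).mean()
    return matching + sparsity

# A mask that keeps everything matches the prediction exactly but pays the
# full sparsity cost; an all-zero mask is maximally compact but pushes the
# masked prediction toward uniform, so the matching term grows.
full_mask = np.ones(16)
empty_mask = np.zeros(16)
print(explanation_loss(full_mask), explanation_loss(empty_mask))
```

By Gibbs' inequality the matching term is minimized when the masked prediction equals the full prediction, so training a mask network against such a loss would trade prediction fidelity against mask size, which is consistent with the abstract's "compact, consistent with model activations" framing.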

Top-level tags: medical, model evaluation, computer vision
Detailed tags: interpretability, explainable AI, medical imaging, classifier activation matching, model faithfulness

Med-CAM: Minimal Evidence for Explaining Medical Decision Making


1️⃣ One-sentence summary

This paper proposes a new framework, Med-CAM, which trains a segmentation network to generate sharp, compact visual evidence maps that intuitively explain the key evidence behind a medical AI model's diagnostic decisions. It addresses the fuzziness of existing explanation methods and aims to increase clinicians' trust in AI.

Source: arXiv:2604.13695