arXiv submission date: 2026-02-10
📄 Abstract - Explainability in Generative Medical Diffusion Models: A Faithfulness-Based Analysis on MRI Synthesis

This study investigates the explainability of generative diffusion models in the context of medical imaging, focusing on magnetic resonance imaging (MRI) synthesis. Although diffusion models have shown strong performance in generating realistic medical images, their internal decision-making process remains largely opaque. We present a faithfulness-based explainability framework that analyzes how prototype-based explainability methods such as ProtoPNet (PPNet), Enhanced ProtoPNet (EPPNet), and ProtoPool can link generated features to training features. Our study focuses on understanding the reasoning behind image formation through the denoising trajectory of the diffusion model, followed by prototype-based explanation with faithfulness analysis. Experimental analysis shows that EPPNet achieves the highest faithfulness score (0.1534), offering more reliable insight into, and explanation of, the generative process. The results highlight that diffusion models can be made more transparent and trustworthy through faithfulness-based explanations, contributing to safer and more interpretable applications of generative AI in healthcare.
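
The abstract does not spell out how the faithfulness score is computed. A common perturbation-based formulation, sketched below under that assumption, occludes the image region each prototype points to and correlates the prototype's claimed importance (its activation) with the actual score drop the occlusion causes; `prototype_faithfulness`, `score_fn`, and `proto_masks` are hypothetical names for illustration, not the authors' API.

```python
import numpy as np

def prototype_faithfulness(image, proto_scores, proto_masks, score_fn):
    """Perturbation-based faithfulness for prototype explanations.

    For each prototype, occlude the image region it points to and record
    how much the model's score drops. Faithfulness is the Pearson
    correlation between the prototype's activation (claimed importance)
    and the measured score drop (actual importance).
    """
    base = score_fn(image)
    drops = []
    for mask in proto_masks:
        occluded = image * (1.0 - mask)           # zero out the prototype's region
        drops.append(base - score_fn(occluded))   # larger drop => region mattered more
    return np.corrcoef(np.asarray(proto_scores), np.asarray(drops))[0, 1]

# Toy usage: a "model" whose score is the mean image intensity, and five
# prototypes that each claim one horizontal band of a synthetic image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
masks = [np.zeros((64, 64)) for _ in range(5)]
for k, m in enumerate(masks):
    m[k * 10:(k + 1) * 10, :] = 1.0
scores = [float(m.sum()) for m in masks]          # stand-in prototype activations
print(prototype_faithfulness(img, scores, masks, score_fn=lambda x: x.mean()))
```

A higher correlation means the prototypes that claim to matter most are also the ones whose removal most changes the model's output, which is the sense in which EPPNet's 0.1534 is reported as the most faithful of the three methods.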

Top-level tags: medical model evaluation computer vision
Detailed tags: explainable ai diffusion models medical imaging faithfulness analysis mri synthesis

Explainability in Generative Medical Diffusion Models: A Faithfulness-Based Analysis on MRI Synthesis


1️⃣ One-Sentence Summary

Using a faithfulness-based explainability framework, this study analyzes the internal decision-making process of diffusion models when generating medical images such as MRI, finds that the enhanced prototype network (EPPNet) provides the most reliable explanations, and thereby improves the transparency and trustworthiness of generative AI in healthcare applications.

Source: arXiv: 2602.09781