Efficient KernelSHAP Explanations for Patch-based 3D Medical Image Segmentation
1️⃣ One-sentence summary
This paper proposes an efficient KernelSHAP framework that tackles the prohibitive computational cost of traditional perturbation-based explanations for 3D medical image segmentation by restricting computation to a region of interest, caching patch predictions, and comparing different feature abstractions; it also reveals a trade-off between the faithfulness of explanations and their clinical interpretability.
Perturbation-based explainability methods such as KernelSHAP provide model-agnostic attributions but are typically impractical for patch-based 3D medical image segmentation due to the large number of coalition evaluations and the high cost of sliding-window inference. We present an efficient KernelSHAP framework for volumetric CT segmentation that restricts computation to a user-defined region of interest and its receptive-field support, and accelerates inference via patch logit caching, reusing baseline predictions for unaffected patches while preserving nnU-Net's fusion scheme. To enable clinically meaningful attributions, we compare three automatically generated feature abstractions within the receptive-field crop: whole-organ units, regular FCC supervoxels, and hybrid organ-aware supervoxels, and we study multiple aggregation/value functions targeting stabilizing evidence (TP/Dice/Soft Dice) or false-positive behavior. Experiments on whole-body CT segmentations show that caching substantially reduces redundant computation (with computational savings ranging from 15% to 30%) and that faithfulness and interpretability exhibit clear trade-offs: regular supervoxels often maximize perturbation-based metrics but lack anatomical alignment, whereas organ-aware units yield more clinically interpretable explanations and are particularly effective for highlighting false-positive drivers under normalized metrics.
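The core of the speed-up is patch logit caching: for each KernelSHAP coalition, only sliding-window patches whose support intersects the perturbed supervoxels need to be re-run through the network, while cached baseline logits are reused everywhere else before fusion. Below is a minimal NumPy sketch of that idea plus a soft Dice value function; all function names are hypothetical illustrations, not the authors' code, and mean fusion stands in for nnU-Net's Gaussian-weighted fusion scheme.

```python
import numpy as np

def affected_patch_ids(patch_origins, patch_size, perturbed_mask):
    """Hypothetical helper: indices of sliding-window patches whose support
    intersects the perturbed region. All other patches can reuse cached
    baseline logits instead of re-running inference."""
    ids = []
    for i, origin in enumerate(patch_origins):
        sl = tuple(slice(o, o + s) for o, s in zip(origin, patch_size))
        if perturbed_mask[sl].any():
            ids.append(i)
    return ids

def fuse_logits(volume_shape, patch_origins, patch_size, patch_logits):
    """Naive mean fusion of overlapping patch logits (a simple stand-in for
    nnU-Net's Gaussian-weighted sliding-window fusion)."""
    acc = np.zeros(volume_shape, dtype=float)
    cnt = np.zeros(volume_shape, dtype=float)
    for origin, logits in zip(patch_origins, patch_logits):
        sl = tuple(slice(o, o + s) for o, s in zip(origin, patch_size))
        acc[sl] += logits
        cnt[sl] += 1.0
    return acc / np.maximum(cnt, 1.0)

def soft_dice(pred_probs, target_mask, eps=1e-6):
    """Soft Dice value function over the ROI, one of several aggregation
    choices (TP / Dice / Soft Dice) a KernelSHAP value function might use."""
    inter = (pred_probs * target_mask).sum()
    return (2.0 * inter + eps) / (pred_probs.sum() + target_mask.sum() + eps)

# Toy usage: an 8^3 volume tiled by eight non-overlapping 4^3 patches.
# Perturbing a single corner voxel leaves 7 of 8 patches untouched,
# so only one patch needs fresh inference.
origins = [(x, y, z) for x in (0, 4) for y in (0, 4) for z in (0, 4)]
mask = np.zeros((8, 8, 8), dtype=bool)
mask[0, 0, 0] = True
print(affected_patch_ids(origins, (4, 4, 4), mask))  # → [0]
```

In this toy configuration, seven of the eight coalition patch evaluations hit the cache, which is the source of the 15–30% savings reported in the abstract (the exact fraction depends on patch overlap and supervoxel placement).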
Source: arXiv: 2604.11775