arXiv submission date: 2026-04-09
📄 Abstract - When Fine-Tuning Changes the Evidence: Architecture-Dependent Semantic Drift in Chest X-Ray Explanations

Transfer learning followed by fine-tuning is widely adopted in medical image classification due to consistent gains in diagnostic performance. However, in multi-class settings with overlapping visual features, improvements in accuracy do not guarantee stability of the visual evidence used to support predictions. We define semantic drift as systematic changes in the attribution structure supporting a model's predictions between transfer learning and full fine-tuning, reflecting potential shifts in underlying visual reasoning despite stable classification performance. Using a five-class chest X-ray task, we evaluate DenseNet201, ResNet50V2, and InceptionV3 under a two-stage training protocol and quantify drift with reference-free metrics capturing spatial localization and structural consistency of attribution maps. Across architectures, coarse anatomical localization remains stable, while overlap IoU reveals pronounced architecture-dependent reorganization of evidential structure. Beyond single-method analysis, stability rankings can reverse across LayerCAM and GradCAM++ under converged predictive performance, establishing explanation stability as an interaction between architecture, optimization phase, and attribution objective.
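The abstract's overlap IoU measures how much the high-attribution regions of two explanation maps (e.g., before vs. after fine-tuning) coincide. The paper's exact thresholding scheme is not given here, so the sketch below assumes a common choice: binarize each map at its top fraction of pixels, then compute intersection-over-union. The function name `attribution_iou` and the `top_frac` parameter are illustrative, not from the paper.

```python
import numpy as np

def attribution_iou(map_a, map_b, top_frac=0.2):
    """IoU of the most-attributed pixels in two saliency maps.

    Hypothetical sketch: each map is binarized to its top `top_frac`
    fraction of pixels (one common convention; the paper may differ),
    then the two binary masks are compared with intersection-over-union.
    """
    def top_mask(m, frac):
        m = np.asarray(m, dtype=float)
        k = max(1, int(frac * m.size))          # number of pixels to keep
        thresh = np.partition(m.ravel(), -k)[-k]  # k-th largest value
        return m >= thresh

    a = top_mask(map_a, top_frac)
    b = top_mask(map_b, top_frac)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0
```

An IoU near 1.0 means the evidential structure is preserved across training stages; a low IoU on maps with similar coarse localization is exactly the architecture-dependent reorganization the abstract describes.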

Top tags: medical, model evaluation, computer vision
Detailed tags: explainable AI, semantic drift, fine-tuning, chest X-ray, attribution maps

When Fine-Tuning Changes the Evidence: Architecture-Dependent Semantic Drift in Chest X-Ray Explanations


1️⃣ One-Sentence Summary

This paper finds that, in multi-class medical image classification, fine-tuning a pretrained model improves diagnostic accuracy but systematically changes the visual evidence the model relies on for its predictions, and the extent of this change varies markedly with model architecture and explanation method.

Source: arXiv:2604.08513