
arXiv submission date: 2026-02-11
📄 Abstract - Med-SegLens: Latent-Level Model Diffing for Interpretable Medical Image Segmentation

Modern segmentation models achieve strong predictive performance but remain largely opaque, limiting our ability to diagnose failures, understand dataset shift, or intervene in a principled manner. We introduce Med-SegLens, a model-diffing framework that decomposes segmentation model activations into interpretable latent features using sparse autoencoders trained on SegFormer and U-Net. Through cross-architecture and cross-dataset latent alignment across healthy, adult, pediatric, and sub-Saharan African glioma cohorts, we identify a stable backbone of shared representations, while dataset shift is driven by differential reliance on population-specific latents. We show that these latents act as causal bottlenecks for segmentation failures, and that targeted latent-level interventions can correct errors and improve cross-dataset adaptation without retraining, recovering performance in 70% of failure cases and improving Dice score from 39.4% to 74.2%.
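The core mechanism the abstract describes, decomposing dense activations into an overcomplete set of sparsely active latent features with a sparse autoencoder, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer sizes, L1 penalty, synthetic "activations," and training loop are all assumed for the example.

```python
# Sketch: a sparse autoencoder (SAE) factoring dense model activations into
# an overcomplete, sparsely active latent code. All hyperparameters here
# (dims, learning rate, L1 weight) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d_act, d_latent = 16, 64           # activation dim < overcomplete latent dim
W_enc = rng.normal(0, 0.1, (d_act, d_latent))
b_enc = np.zeros(d_latent)
W_dec = rng.normal(0, 0.1, (d_latent, d_act))

def encode(x):
    # ReLU encoder -> non-negative, mostly-zero latent code
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(z):
    return z @ W_dec

# Stand-in for segmentation-model activations: low-rank structure + noise
basis = rng.normal(size=(4, d_act))
acts = rng.normal(size=(512, 4)) @ basis + 0.01 * rng.normal(size=(512, d_act))

lr, l1 = 0.01, 1e-3
losses = []
for step in range(300):
    z = encode(acts)
    err = decode(z) - acts
    losses.append((err ** 2).mean() + l1 * z.mean())
    # Manual gradients: squared-error term plus L1 on active latents
    dz = (2.0 / err.size) * err @ W_dec.T
    dz = np.where(z > 0, dz + l1 / z.size, 0.0)     # ReLU mask
    W_dec -= lr * z.T @ ((2.0 / err.size) * err)
    W_enc -= lr * acts.T @ dz
    b_enc -= lr * dz.sum(axis=0)

sparsity = (encode(acts) > 0).mean()   # fraction of latents that fire
```

The L1 penalty is what makes individual latents interpretable candidates: each input activates only a few of them, so a latent can be inspected and compared across cohorts.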

Top-level tags: medical computer vision, model evaluation
Detailed tags: medical image segmentation, model interpretability, latent feature analysis, dataset shift, sparse autoencoders

Med-SegLens: Latent-Level Model Diffing for Interpretable Medical Image Segmentation


1️⃣ One-Sentence Summary

This paper introduces Med-SegLens, a framework that analyzes the latent features inside medical image segmentation models to explain why the models fail and to localize dataset differences, allowing it to repair a large fraction of segmentation errors and improve adaptation across population cohorts without any retraining.
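The "repair without retraining" step rests on latent-level intervention: once an SAE decoder is trained, an activation can be shifted along one latent's decoder direction before it flows back into the model, amplifying or suppressing that feature. A hedged sketch, where the decoder matrix, latent index, and scale are illustrative placeholders rather than values from the paper:

```python
# Sketch: steering an activation along one SAE latent's decoder direction.
# W_dec stands in for a trained SAE decoder; index and scale are assumptions.
import numpy as np

rng = np.random.default_rng(1)
d_latent, d_act = 64, 16
W_dec = rng.normal(0, 0.1, (d_latent, d_act))   # assumed pre-trained decoder

def intervene(activation, latent_idx, scale):
    # scale > 0 amplifies the latent feature; scale < 0 suppresses it.
    return activation + scale * W_dec[latent_idx]

act = rng.normal(size=d_act)
steered = intervene(act, latent_idx=3, scale=-2.0)  # suppress latent 3
```

Because the edit is a simple additive shift in activation space, it can be applied at inference time to a frozen model, which is what makes cross-cohort correction possible without retraining.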

Source: arXiv: 2602.10508