
arXiv submission date: 2026-04-27
📄 Abstract - SemiSAM-O1: How far can we push the boundary of annotation-efficient medical image segmentation?

Semi-supervised learning (SSL) has become a promising solution to alleviate the annotation burden of deep learning-based medical image segmentation models. While recent advances in foundation model-driven SSL have pushed the boundary to extremely limited annotation scenarios, they fail to maintain robust competitive performance in complex imaging modalities. In this paper, we propose SemiSAM-O1, an annotation-efficient framework using only one annotated template image for segmentation. SemiSAM-O1 extends the specialist-generalist collaborative learning framework to the extreme one-label setting by fully exploiting the foundation model's feature representation capability beyond its prompting interface. SemiSAM-O1 operates in two stages. In the first stage, the foundation model's encoder extracts dense features from all volumes, and class prototypes derived from the single annotated template are propagated to the unlabeled pool via feature similarity to produce coarse initial pseudo-labels. In the second stage, an iterative training-and-refinement loop progressively improves both the segmentation model and the pseudo-labels over multiple rounds, where each round trains the model from scratch on current pseudo-labels and generates updated predictions with voxel-wise uncertainty estimates. An uncertainty-guided refinement step further leverages the foundation model's global feature space to correct high-uncertainty regions by aggregating labels from their most similar confident neighbors, establishing a virtuous cycle of mutual improvement. Extensive experiments on a wide range of segmentation tasks across different modalities and anatomical targets demonstrate that SemiSAM-O1 significantly narrows the performance gap between one-label semi-supervised learning and full supervision, while significantly reducing the computational overhead of online foundation model inference.
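To make the first stage concrete, here is a minimal NumPy sketch of prototype-based pseudo-label propagation as the abstract describes it: class prototypes are averaged from the single annotated template's encoder features, and each unlabeled voxel receives the label of its most similar prototype. The function names, array shapes, and use of cosine similarity are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def class_prototypes(feats, labels, num_classes):
    # feats: (N, D) per-voxel features from the foundation-model encoder
    # labels: (N,) integer class labels of the single annotated template
    protos = np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])
    # L2-normalize so similarity reduces to a dot product (cosine similarity)
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def propagate_pseudo_labels(feats, protos):
    # feats: (M, D) per-voxel features of an unlabeled volume
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ protos.T           # (M, C) similarity to each class prototype
    return sim.argmax(axis=1)    # coarse initial pseudo-label per voxel
```

These coarse labels only seed the second stage; the iterative loop is what closes most of the gap to full supervision.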

Top-level tags: medical, machine learning, model training
Detailed tags: semi-supervised learning, medical image segmentation, foundation model, pseudo-labeling, uncertainty estimation

SemiSAM-O1: How far can we push the boundary of annotation-efficient medical image segmentation?


1️⃣ One-sentence summary

This paper proposes SemiSAM-O1, a medical image segmentation framework that needs only a single annotated template image. By combining the strong feature representations of a foundation model with an iterative pseudo-label generation and refinement loop, it approaches fully supervised performance across a range of complex medical imaging modalities while substantially reducing computational cost.
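The uncertainty-guided refinement step in the second stage, where high-uncertainty voxels borrow labels from their most similar confident neighbors in the foundation model's feature space, can be sketched as follows. The threshold `tau`, neighbor count `k`, and majority-vote aggregation are hypothetical choices for illustration only.

```python
import numpy as np

def refine_uncertain_voxels(feats, pseudo, uncertainty, tau=0.5, k=5):
    # feats: (N, D) foundation-model features; pseudo: (N,) current pseudo-labels
    # uncertainty: (N,) voxel-wise uncertainty from the segmentation model
    conf = uncertainty <= tau            # confident voxels keep their labels
    unc = ~conf
    if not unc.any() or not conf.any():
        return pseudo.copy()
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f[unc] @ f[conf].T             # similarity: uncertain -> confident voxels
    nn = np.argsort(-sim, axis=1)[:, :k] # k most similar confident neighbors
    neighbor_labels = pseudo[conf][nn]   # (num_uncertain, k) labels to aggregate
    refined = pseudo.copy()
    # majority vote over each uncertain voxel's confident neighbors
    refined[unc] = np.array([np.bincount(row).argmax() for row in neighbor_labels])
    return refined
```

In the loop the abstract describes, each round would train the segmentation model from scratch on the current pseudo-labels, predict with uncertainty, and then apply a correction of this kind before the next round.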

Source: arXiv 2604.24109