arXiv submission date: 2026-04-20
📄 Abstract - Align then Refine: Text-Guided 3D Prostate Lesion Segmentation

Automated 3D segmentation of prostate lesions from biparametric MRI (bp-MRI) is essential for reliable algorithmic analysis, but achieving high precision remains challenging. Volumetric methods must combine multiple modalities while ensuring anatomical consistency, yet current models struggle to integrate cross-modal information reliably. While vision-language models (VLMs) are increasingly replacing conventional architectural designs, they still lack the fine-grained, lesion-level semantics required for effective localized guidance. To address these limitations, we propose a new multi-encoder U-Net architecture incorporating three key innovations: (1) an alignment loss that enhances foreground text-image similarity to inject lesion semantics; (2) a heatmap loss that calibrates the similarity map and suppresses spurious background activations; and (3) a final-stage, confidence-gated multi-head cross-attention refiner that performs localized boundary edits in high-confidence regions. A phase-scheduled training regime stabilizes the optimization of these components. Our method consistently outperforms prior approaches, establishing a new state-of-the-art on the PI-CAI dataset through enhanced multi-modal fusion and localized text guidance. Our code is available at this https URL.
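The abstract's first two innovations pair an alignment loss (pull foreground voxel features toward the lesion text embedding) with a heatmap loss that calibrates the resulting similarity map. A minimal NumPy sketch of one way these two objectives could be combined is shown below; the function name, the sigmoid-with-temperature calibration, and the single-text-embedding setup are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def foreground_alignment_loss(voxel_feats, text_emb, fg_mask, tau=0.07):
    """Hypothetical sketch of a text-image alignment objective.

    voxel_feats: (N, D) L2-normalized voxel features (flattened 3D volume)
    text_emb:    (D,)   L2-normalized lesion text embedding
    fg_mask:     (N,)   binary lesion mask (1 = foreground voxel)
    tau:         temperature turning cosine similarity into a probability
    """
    # Per-voxel cosine similarity to the text embedding, in [-1, 1].
    sim = voxel_feats @ text_emb
    # Calibrate the similarity map into [0, 1] (this doubles as the heatmap).
    heatmap = 1.0 / (1.0 + np.exp(-sim / tau))
    # BCE against the mask: pushes foreground similarity up and
    # suppresses spurious background activations.
    eps = 1e-8
    loss = -np.mean(fg_mask * np.log(heatmap + eps)
                    + (1.0 - fg_mask) * np.log(1.0 - heatmap + eps))
    return loss, heatmap
```

Under this sketch, a volume whose foreground features align with the text embedding yields a near-zero loss, while misaligned features are heavily penalized; the calibrated `heatmap` is what a heatmap loss and a confidence-gated refiner could then consume.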

Top tags: medical, multi-modal, computer vision
Detailed tags: prostate lesion segmentation, vision-language models, cross-modal fusion, U-Net architecture

Align then Refine: Text-Guided 3D Prostate Lesion Segmentation


1️⃣ One-sentence summary

This paper proposes a 3D prostate lesion segmentation method that combines textual and imaging information: the model first learns to align lesion regions with their text descriptions, then performs fine-grained boundary refinement in high-confidence regions, significantly improving segmentation accuracy and achieving state-of-the-art results on a public dataset.

Source: arXiv: 2604.18713