Supervise-assisted Multi-modality Fusion Diffusion Model for PET Restoration
1️⃣ One-sentence summary
This paper proposes an MR-assisted diffusion model that, through a novel multi-modality feature fusion module and a two-stage supervise-assisted learning strategy, effectively tackles the structural inconsistency and out-of-distribution mismatch problems in low-dose PET restoration, significantly improving image quality.
Positron emission tomography (PET) offers powerful functional imaging but involves radiation exposure. Efforts to reduce this exposure by lowering the radiotracer dose or scan time degrade image quality. While using magnetic resonance (MR) images, with their clearer anatomical information, to restore standard-dose PET (SPET) from low-dose PET (LPET) is a promising approach, it faces two challenges: structural and textural inconsistencies during multi-modality fusion, and mismatch on out-of-distribution (OOD) data. In this paper, we propose a supervise-assisted multi-modality fusion diffusion model (MFdiff) to address these challenges and achieve high-quality PET restoration. Firstly, to fully utilize auxiliary MR images without introducing extraneous details into the restored image, a multi-modality feature fusion module is designed to learn an optimized fusion feature. Secondly, using the fusion feature as an additional condition, high-quality SPET images are iteratively generated by the diffusion model. Furthermore, we introduce a two-stage supervise-assisted learning strategy that harnesses both generalized priors from simulated in-distribution datasets and specific priors tailored to in-vivo OOD data. Experiments demonstrate that the proposed MFdiff effectively restores high-quality SPET images from multi-modality inputs and outperforms state-of-the-art methods both qualitatively and quantitatively.
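The abstract describes conditioning a diffusion model on a fused LPET+MR feature. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the learned fusion module is stood in for by a fixed convex combination, the data are random placeholders, and only one DDPM-style forward-noising step plus the conditioning concatenation is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D slice size (placeholder, not from the paper)
H, W = 64, 64
lpet = rng.standard_normal((H, W))   # low-dose PET slice (random placeholder)
mr   = rng.standard_normal((H, W))   # aligned MR slice (random placeholder)

def fuse(lpet, mr, alpha=0.7):
    """Toy stand-in for the paper's learned multi-modality fusion module:
    a fixed convex combination instead of learned fusion weights."""
    return alpha * lpet + (1.0 - alpha) * mr

cond = fuse(lpet, mr)

# Standard DDPM forward noising q(x_t | x_0):
#   x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
betas = np.linspace(1e-4, 0.02, 1000)
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

spet = rng.standard_normal((H, W))   # stand-in for the SPET target x_0
eps  = rng.standard_normal((H, W))
x_t  = q_sample(spet, t=500, eps=eps)

# A trained denoiser eps_theta(x_t, t, cond) would receive the fusion
# feature as an extra condition; here conditioning is plain concatenation.
net_input = np.stack([x_t, cond])    # shape (2, H, W)
```

The sketch only illustrates the data flow (fuse, noise, condition); the paper's actual fusion module and two-stage supervise-assisted training are learned components not reproduced here.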
Source: arXiv: 2602.11545