TriFusion-SR: Joint Tri-Modal Medical Image Fusion and Super-Resolution
1️⃣ One-Sentence Summary
This paper proposes a new method called TriFusionSR: a framework based on the wavelet transform and a conditional diffusion model that merges two steps, fusing three different medical image modalities and upscaling their resolution, into one. By avoiding the quality degradation caused by the traditional two-stage pipeline, it produces fused images that are sharper and carry more complete information.
Multimodal medical image fusion facilitates comprehensive diagnosis by aggregating complementary structural and functional information, but its effectiveness is limited by resolution degradation and modality discrepancies. Existing approaches typically perform image fusion and super-resolution (SR) in separate stages, leading to artifacts and degraded perceptual quality. These limitations are further amplified in tri-modal settings that combine anatomical modalities (e.g., MRI, CT) with functional scans (e.g., PET, SPECT), due to pronounced frequency-domain imbalances. We propose TriFusionSR, a wavelet-guided conditional diffusion framework for joint tri-modal fusion and SR. The framework explicitly decomposes multimodal features into frequency bands using the 2D Discrete Wavelet Transform, enabling frequency-aware cross-modal interaction. We further introduce a Rectified Wavelet Features (RWF) strategy for latent coefficient calibration, followed by an Adaptive Spatial-Frequency Fusion (ASFF) module with gated channel-spatial attention to enable structure-driven multimodal refinement. Extensive experiments demonstrate state-of-the-art performance, achieving 4.8-12.4% PSNR improvement and substantial reductions in RMSE and LPIPS across multiple upsampling scales.
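To make the wavelet decomposition concrete, here is a minimal sketch of a single-level 2D DWT with the Haar filter, splitting an image into low-frequency (LL) and high-frequency (LH, HL, HH) bands. This is an illustration of the general technique only; the paper's actual wavelet filters, levels, and feature-space usage are not specified in the abstract, and the function name `haar_dwt2` is our own.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT (illustrative sketch, not the paper's exact transform).

    x: 2D float array with even height and width.
    Returns the four frequency bands (LL, LH, HL, HH), each at half resolution.
    """
    # Filter along columns: average / difference of adjacent pixel pairs
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Filter along rows: repeat on adjacent row pairs
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0  # coarse structure (low-low)
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0  # horizontal edges
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0  # vertical edges
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # each band is half the input resolution: (2, 2)
```

In a frequency-aware fusion setting, such bands let a model treat the smooth anatomy-dominated LL coefficients and the edge-dominated high-frequency coefficients with separate cross-modal interactions, which is the kind of imbalance the abstract attributes to mixing anatomical (MRI/CT) and functional (PET/SPECT) scans.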
Source: arXiv: 2603.09702