Abstract - A Unified Framework for Multimodal Image Reconstruction and Synthesis using Denoising Diffusion Models
Image reconstruction and image synthesis are important for handling incomplete multimodal imaging data, but existing methods require multiple task-specific models, complicating training and deployment workflows. We introduce Any2all, a unified framework that addresses this limitation by formulating these disparate tasks as a single virtual inpainting problem. We train a single, unconditional diffusion model on the complete multimodal data stack; at inference time, this model is adapted to "inpaint" all target modalities from any combination of available clean images or noisy measurements. We validated Any2all on a PET/MR/CT brain dataset. Our results show that Any2all achieves strong performance on both multimodal reconstruction and synthesis tasks, consistently yielding images with competitive distortion-based metrics and superior perceptual quality compared with specialized methods.
A Unified Framework for Multimodal Image Reconstruction and Synthesis using Denoising Diffusion Models
1️⃣ One-sentence summary
This paper proposes a unified framework named Any2all that uses a single denoising diffusion model to solve a variety of multimodal image reconstruction and synthesis tasks by casting them as a virtual "inpainting" problem. This avoids the burden of training a separate model for each task, and in experiments it achieves excellent results with better perceptual quality.
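The inference-time idea — keep one unconditional diffusion model and re-impose the known modalities at every reverse step so the missing ones are "inpainted" — can be sketched as a toy loop. The schedule, the zero-noise stand-in `denoiser`, and all array shapes below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.02, T)   # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x, t):
    # Stand-in for the trained unconditional noise-prediction model
    # (placeholder: predicts zero noise).
    return np.zeros_like(x)

def inpaint(observed, mask, shape):
    """Reverse diffusion where known channels (mask == 1) are replaced
    at every step by the observations noised to the current level."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps = denoiser(x, t)
        # Standard DDPM mean update.
        x = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
            # Re-impose the known modalities at the matching noise level.
            noisy_obs = (np.sqrt(alpha_bars[t - 1]) * observed
                         + np.sqrt(1 - alpha_bars[t - 1]) * rng.standard_normal(shape))
            x = mask * noisy_obs + (1 - mask) * x
        else:
            x = mask * observed + (1 - mask) * x
    return x

# Example: a 3-channel "PET/MR/CT" stack where only channel 0 (say, MR) is known.
stack = np.zeros((3, 8, 8))
stack[0] = 1.0
mask = np.zeros_like(stack)
mask[0] = 1.0
out = inpaint(stack, mask, stack.shape)
print(out.shape)  # (3, 8, 8)
```

Because the model is unconditional, the same loop handles any input/output combination — only `mask` changes, which is what lets one model replace many task-specific ones.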