Diffusion Model as a Generalist Segmentation Learner
1️⃣ One-Sentence Summary
This paper proposes DiGSeg, a framework that repurposes the denoising process of a pretrained diffusion model as a general-purpose segmentation tool. By encoding the image and mask as conditioning signals and combining them with text features, it achieves state-of-the-art performance on semantic segmentation, open-vocabulary segmentation, and cross-domain tasks such as medical and remote-sensing imagery, all without domain-specific customization, thereby turning the diffusion model from an image generator into a versatile visual understanding model.
Diffusion models are trained primarily for image synthesis, yet their denoising trajectories encode rich, spatially aligned visual priors. In this paper, we demonstrate that these priors can be harnessed for text-conditioned semantic and open-vocabulary segmentation, and that the approach generalizes to diverse downstream tasks, yielding a general-purpose diffusion segmentation framework. Concretely, we introduce DiGSeg (Diffusion Models as a Generalist Segmentation Learner), which repurposes a pretrained diffusion model into a unified segmentation framework. Our approach encodes the input image and ground-truth mask into the latent space and concatenates them as conditioning signals for the diffusion U-Net. A parallel CLIP-aligned text pathway injects language features at multiple scales, enabling the model to align textual queries with evolving visual representations. This design transforms an off-the-shelf diffusion backbone into a universal interface that produces structured segmentation masks conditioned on both appearance and arbitrary text prompts. Extensive experiments demonstrate state-of-the-art performance on standard semantic segmentation benchmarks, as well as strong open-vocabulary generalization and cross-domain transfer to medical, remote sensing, and agricultural scenarios, all without domain-specific architectural customization. These results indicate that modern diffusion backbones can serve as generalist segmentation learners rather than pure generators, narrowing the gap between visual generation and visual understanding.
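The conditioning design described above can be illustrated with a minimal NumPy sketch. Everything here is a toy stand-in, not the paper's actual modules: `encode_to_latent` replaces the pretrained diffusion VAE with average pooling, and `cross_attention` is a single-head approximation of the CLIP-aligned text injection. The sketch only shows the data flow: image and mask latents are concatenated along channels as the U-Net condition, and text tokens are fused into flattened spatial tokens via cross-attention.

```python
import numpy as np

def encode_to_latent(x, down=8):
    """Toy stand-in for the diffusion VAE encoder: average-pool H x W
    by a factor of `down` (the real model would use learned weights)."""
    c, h, w = x.shape
    return x.reshape(c, h // down, down, w // down, down).mean(axis=(2, 4))

def cross_attention(spatial, text):
    """Minimal single-head cross-attention: spatial tokens attend to
    text tokens, with a residual connection (illustrative only)."""
    # spatial: (N, d), text: (T, d)
    d = spatial.shape[1]
    scores = spatial @ text.T / np.sqrt(d)                     # (N, T)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)              # softmax
    return spatial + weights @ text                            # inject text

rng = np.random.default_rng(0)
image = rng.standard_normal((3, 64, 64))   # RGB input
mask = rng.standard_normal((1, 64, 64))    # ground-truth mask channel

z_img = encode_to_latent(image)            # (3, 8, 8)
z_mask = encode_to_latent(mask)            # (1, 8, 8)

# Concatenate image and mask latents along channels as the U-Net condition.
cond = np.concatenate([z_img, z_mask], axis=0)   # (4, 8, 8)

# Flatten to spatial tokens and fuse CLIP-like text features; the shared
# token dimension d=4 is an arbitrary toy choice.
tokens = cond.reshape(4, -1).T             # (64, 4) spatial tokens
text = rng.standard_normal((5, 4))         # 5 hypothetical text tokens
fused = cross_attention(tokens, text)      # (64, 4) text-conditioned tokens
```

In the actual framework this fusion would happen at multiple U-Net scales rather than once on a single feature map; the sketch collapses that to one level for clarity.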
Source: arXiv: 2604.24575