Learning Domain-Aware Task Prompt Representations for Multi-Domain All-in-One Image Restoration
1️⃣ One-Sentence Summary
This paper proposes a new method named DATPRL-IR, the first to handle multiple image restoration tasks across different domains (e.g., natural scenes, medical imaging) with a single model. Its core idea is to learn domain-aware task prompt representations that adaptively combine task and domain knowledge, significantly improving restoration quality and generalization.
Recently, significant breakthroughs have been made in all-in-one image restoration (AiOIR), which can handle multiple restoration tasks with a single model. However, existing methods typically focus on a specific image domain, such as natural scenes, medical imaging, or remote sensing. In this work, we aim to extend AiOIR to multiple domains and propose the first multi-domain all-in-one image restoration method, DATPRL-IR, based on our proposed Domain-Aware Task Prompt Representation Learning. Specifically, we first construct a task prompt pool containing multiple task prompts, in which task-related knowledge is implicitly encoded. For each input image, the model adaptively selects the most relevant task prompts and composes them into an instance-level task representation via a prompt composition mechanism (PCM). Furthermore, to endow the model with domain awareness, we introduce another domain prompt pool and distill domain priors from multimodal large language models into the domain prompts. The PCM is then used to combine the adaptively selected domain prompts into a domain representation for each input image. Finally, the two representations are fused to form a domain-aware task prompt representation, which makes full use of both specific and shared knowledge across tasks and domains to guide the subsequent restoration process. Extensive experiments demonstrate that our DATPRL-IR significantly outperforms existing SOTA image restoration methods, while exhibiting strong generalization capabilities. Code is available at this https URL.
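The abstract's pipeline — select prompts from a pool per input, compose them into an instance-level representation, and fuse the task and domain representations — can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the selection scoring, top-k size, softmax weighting in `compose`, and concatenation-based fusion are all assumptions, since the abstract does not specify the exact PCM math.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class PromptPool:
    """A pool of prompt vectors with key-based adaptive selection.

    Hypothetical sketch: each prompt has a key; an input feature is
    matched against the keys, the top-k prompts are selected, and a
    softmax over their scores weights the composition (PCM-style).
    """
    def __init__(self, num_prompts, dim, seed):
        rng = np.random.default_rng(seed)
        self.keys = rng.standard_normal((num_prompts, dim))
        self.prompts = rng.standard_normal((num_prompts, dim))

    def compose(self, feat, top_k=2):
        # cosine similarity between the input feature and each prompt key
        sims = self.keys @ feat / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(feat) + 1e-8
        )
        idx = np.argsort(sims)[-top_k:]   # adaptively select top-k prompts
        w = softmax(sims[idx])            # composition weights
        return w @ self.prompts[idx]      # instance-level representation

dim = 8
feat = np.ones(dim)                       # stand-in for an image feature
task_pool = PromptPool(num_prompts=4, dim=dim, seed=0)
domain_pool = PromptPool(num_prompts=3, dim=dim, seed=1)

task_rep = task_pool.compose(feat)        # instance-level task representation
domain_rep = domain_pool.compose(feat)    # instance-level domain representation

# Fuse into a domain-aware task prompt representation
# (simple concatenation here; the paper's fusion module is not specified).
datp = np.concatenate([task_rep, domain_rep])
print(datp.shape)  # (16,)
```

In the actual method the prompts would be learnable parameters trained end-to-end, and the domain prompts would additionally be distilled from multimodal LLM priors; the sketch only shows the select-compose-fuse data flow.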
Source: arXiv:2603.01725