Multi-Modal Guided Multi-Source Domain Adaptation for Object Detection
1️⃣ One-Sentence Summary
This paper proposes MS-DePro, a new method that introduces two kinds of general-purpose information that do not depend on the source data type, namely depth maps and text prompts, to assist the localization and classification tasks of object detection, respectively. This lets the detector learn more effectively from multiple heterogeneous data sources and achieve the best performance in new scenes.
General object detection (OD) struggles to detect objects in a target domain that differs from the training distribution. To address this, recent studies demonstrate that training on multiple source domains and explicitly processing them separately for multi-source domain adaptation (MSDA) outperforms blending them for unsupervised domain adaptation (UDA). However, existing MSDA methods learn domain-agnostic features from domain-specific RGB images, so the supposedly domain-agnostic feature maps still retain domain-specific information. To overcome this, we propose MS-DePro: Multi-Source Detector with Depth and Prompt, composed of (1) depth-guided localization and (2) multi-modal guided prompt learning. We leverage domain-agnostic input modalities, namely depth maps and text, to encode domain-agnostic characteristics. Specifically, we utilize depth maps to generate domain-agnostic region proposals for localization and integrate multi-modal features to align learnable text embeddings for classification. MS-DePro achieves state-of-the-art performance on MSDA benchmarks, and comprehensive ablations demonstrate the effectiveness of our contributions. Our code is available at this https URL.
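The abstract only describes the two components at a high level. The toy PyTorch sketch below illustrates one plausible way they could be wired up: a proposal head driven purely by depth features, and a CLIP-style classification head that matches pooled region features against learnable text-prompt embeddings. All module names, layer sizes, and the cosine-similarity prompt head are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the two components described in the abstract, NOT the
# authors' code. Module names, dimensions, and the CLIP-style prompt head are
# assumptions made purely for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthGuidedProposer(nn.Module):
    """Toy depth-guided localization: score anchors from depth features only,
    so proposals do not depend on domain-specific RGB statistics."""
    def __init__(self, feat_dim=256, num_anchors=9):
        super().__init__()
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.objectness = nn.Conv2d(feat_dim, num_anchors, 1)     # per-anchor score
        self.box_delta = nn.Conv2d(feat_dim, num_anchors * 4, 1)  # per-anchor box offsets

    def forward(self, depth_map):
        f = self.depth_encoder(depth_map)
        return self.objectness(f), self.box_delta(f)

class PromptClassifier(nn.Module):
    """Toy prompt-based classification: learnable text-prompt embeddings are
    matched to pooled region features by cosine similarity (CLIP-style)."""
    def __init__(self, num_classes=8, region_dim=256, embed_dim=512):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_classes, embed_dim) * 0.02)
        self.proj = nn.Linear(region_dim, embed_dim)  # project region features
        self.logit_scale = nn.Parameter(torch.tensor(10.0))

    def forward(self, region_feats):                  # (N, region_dim) pooled RoI features
        r = F.normalize(self.proj(region_feats), dim=-1)
        p = F.normalize(self.prompts, dim=-1)
        return self.logit_scale * r @ p.t()           # (N, num_classes) logits

if __name__ == "__main__":
    depth = torch.randn(2, 1, 128, 128)               # dummy depth maps
    obj, deltas = DepthGuidedProposer()(depth)
    rois = torch.randn(16, 256)                       # dummy pooled region features
    logits = PromptClassifier()(rois)
    print(obj.shape, deltas.shape, logits.shape)
```

In this reading, localization is conditioned only on the depth modality while classification is anchored to text embeddings, which is how both heads could stay independent of source-specific RGB appearance; the actual fusion and alignment losses would follow the paper.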
Source: arXiv: 2605.13140