arXiv submission date: 2026-03-16
📄 Abstract - Seeing Beyond: Extrapolative Domain Adaptive Panoramic Segmentation

Cross-domain panoramic semantic segmentation has attracted growing interest as it enables comprehensive 360° scene understanding for real-world applications. However, it remains particularly challenging due to severe geometric Field of View (FoV) distortions and inconsistent open-set semantics across domains. In this work, we formulate an open-set domain adaptation setting, and propose Extrapolative Domain Adaptive Panoramic Segmentation (EDA-PSeg) framework that trains on local perspective views and tests on full 360° panoramic images, explicitly tackling both geometric FoV shifts across domains and semantic uncertainty arising from previously unseen classes. To this end, we propose the Euler-Margin Attention (EMA), which introduces an angular margin to enhance viewpoint-invariant semantic representation, while performing amplitude and phase modulation to improve generalization toward unseen classes. Additionally, we design the Graph Matching Adapter (GMA), which builds high-order graph relations to align shared semantics across FoV shifts while effectively separating novel categories through structural adaptation. Extensive experiments on four benchmark datasets under camera-shift, weather-condition, and open-set scenarios demonstrate that EDA-PSeg achieves state-of-the-art performance, robust generalization to diverse viewing geometries, and resilience under varying environmental conditions. The code is available at this https URL.
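The abstract describes the Euler-Margin Attention only at a high level: an angular margin is injected into the attention computation to encourage viewpoint-invariant semantic matching. As a rough illustration of what an angular margin in attention can look like, here is a minimal NumPy sketch that applies an ArcFace-style margin to the query-key angles before the softmax. The function name, margin, and scale values are assumptions for illustration; the actual EMA module (including its amplitude and phase modulation) is not specified in the abstract and may differ substantially.

```python
import numpy as np

def angular_margin_attention(Q, K, V, margin=0.1, scale=8.0):
    """Illustrative sketch (not the paper's EMA): attention whose logits
    are the cosine of the query-key angle widened by an additive margin,
    which penalizes borderline matches and sharpens viewpoint-invariant ones.

    Q, K, V: arrays of shape (n, d).
    """
    # L2-normalize so the dot product is the cosine of the angle
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    cos_sim = np.clip(Qn @ Kn.T, -1.0, 1.0)        # cosine similarity in [-1, 1]
    theta = np.arccos(cos_sim)                     # query-key angle in radians
    logits = scale * np.cos(theta + margin)        # additive angular margin, rescaled
    # numerically stable softmax over keys
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Because the margin is added to the angle rather than the cosine directly, its effect is largest for pairs that are already nearly aligned, which is the same geometric intuition behind margin-based face-recognition losses.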

Top-level tags: computer vision, model training, model evaluation
Detailed tags: domain adaptation, panoramic segmentation, open-set learning, geometric distortion, semantic alignment

Seeing Beyond: Extrapolative Domain Adaptive Panoramic Segmentation


1️⃣ One-sentence summary

This paper proposes a new framework, EDA-PSeg, that trains on ordinary perspective-view images and tests on 360° panoramic images. Through a novel angular attention mechanism and a graph matching adapter, it tackles the geometric distortion and unknown-category recognition challenges of this setting, achieving more robust panoramic scene understanding across domains.

Source: arXiv:2603.15475