Cross-view Domain Generalization via Geometric Consistency for LiDAR Semantic Segmentation
1️⃣ One-sentence summary
This paper proposes a new method named CVGC that simulates the geometric variations arising from different observation viewpoints and forces the model to make consistent predictions for different-viewpoint point clouds of the same scene, effectively improving the ability of LiDAR semantic segmentation models to generalize to unseen environments with drastically different acquisition viewpoints.
Domain-generalized LiDAR semantic segmentation (LSS) seeks to train models on source-domain point clouds that generalize reliably to multiple unseen target domains, which is essential for real-world LiDAR applications. However, existing approaches assume similar acquisition views (e.g., vehicle-mounted) and struggle in cross-view scenarios, where observations differ substantially due to viewpoint-dependent structural incompleteness and non-uniform point density. Accordingly, we formulate cross-view domain generalization for LiDAR semantic segmentation and propose a novel framework, termed CVGC (Cross-View Geometric Consistency). Specifically, we introduce a cross-view geometric augmentation module that models viewpoint-induced variations in visibility and sampling density, generating multiple cross-view observations of the same scene. Subsequently, a geometric consistency module enforces consistent semantic and occupancy predictions across geometrically augmented point clouds of the same scene. Extensive experiments on six public LiDAR datasets establish the first systematic evaluation of cross-view domain generalization for LiDAR semantic segmentation, demonstrating that CVGC consistently outperforms state-of-the-art methods when generalizing from a single source domain to multiple target domains with heterogeneous acquisition viewpoints. The source code will be publicly available at this https URL.
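To make the two components in the abstract concrete, below is a minimal, hedged Python/PyTorch sketch of the general idea: a viewpoint-style augmentation that drops points with distance-dependent probability from a simulated sensor origin (mimicking viewpoint-induced incompleteness and non-uniform density), plus a consistency loss on the semantic predictions for points visible in both augmented views. All names (`viewpoint_dropout`, `cross_view_consistency_loss`, the toy linear head) are illustrative assumptions, not the authors' actual modules, and the occupancy-prediction consistency term is omitted.

```python
import torch
import torch.nn.functional as F

def viewpoint_dropout(points, sensor_origin, max_keep=0.95, min_keep=0.3):
    """Simulate viewpoint-dependent sampling: points far from a hypothetical
    sensor origin are dropped with higher probability. Returns the kept points
    and the indices of the surviving original points."""
    dist = torch.linalg.norm(points - sensor_origin, dim=1)
    d = (dist - dist.min()) / (dist.max() - dist.min() + 1e-8)  # normalize to [0, 1]
    keep_prob = max_keep - (max_keep - min_keep) * d            # farther -> lower keep prob
    keep_mask = torch.rand_like(keep_prob) < keep_prob
    idx = torch.nonzero(keep_mask, as_tuple=False).squeeze(1)
    return points[idx], idx

def cross_view_consistency_loss(logits_a, idx_a, logits_b, idx_b, num_points):
    """Symmetric KL divergence between per-point class distributions predicted
    for two augmented views, computed only on points visible in both views."""
    full_a = torch.full((num_points, logits_a.shape[1]), float('nan'))
    full_b = torch.full((num_points, logits_b.shape[1]), float('nan'))
    full_a[idx_a] = logits_a
    full_b[idx_b] = logits_b
    shared = (~torch.isnan(full_a[:, 0])) & (~torch.isnan(full_b[:, 0]))
    pa = F.log_softmax(full_a[shared], dim=1)
    pb = F.log_softmax(full_b[shared], dim=1)
    return 0.5 * (F.kl_div(pa, pb, log_target=True, reduction='batchmean')
                  + F.kl_div(pb, pa, log_target=True, reduction='batchmean'))

# Toy usage: one random scene, two simulated acquisition viewpoints, and a
# stand-in point-wise classifier (in practice, the LSS segmentation backbone).
points = torch.rand(2048, 3) * 50.0
view_a, idx_a = viewpoint_dropout(points, torch.tensor([0.0, 0.0, 1.8]))   # vehicle-like origin
view_b, idx_b = viewpoint_dropout(points, torch.tensor([0.0, 0.0, 30.0]))  # elevated origin
head = torch.nn.Linear(3, 20)  # placeholder segmentation head, 20 classes
loss = cross_view_consistency_loss(head(view_a), idx_a, head(view_b), idx_b, points.shape[0])
```

In training, a loss like this would be added to the usual supervised segmentation loss on the source domain, so the network is penalized whenever its semantic predictions for the same physical point diverge across simulated viewpoints.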
Source: arXiv:2602.14525