arXiv submission date: 2026-03-01
📄 Abstract - CoSMo3D: Open-World Promptable 3D Semantic Part Segmentation through LLM-Guided Canonical Spatial Modeling

Open-world promptable 3D semantic segmentation remains brittle because semantics are inferred in the input sensor coordinates. Humans, in contrast, interpret parts via their functional roles in a canonical space: wings extend laterally, handles protrude to the side, and legs support from below. Psychophysical evidence shows that we mentally rotate objects into canonical frames to reveal these roles. To close this gap, we propose \methodName{}, which attains canonical-space perception by inducing a latent canonical reference frame learned directly from data. By construction, we build a unified canonical dataset through LLM-guided intra- and cross-category alignment, exposing canonical spatial regularities across 200 categories. By induction, we realize canonicality inside the model through a dual-branch architecture with canonical map anchoring and canonical box calibration, collapsing pose variation and symmetry into a stable canonical embedding. This shift from the input pose space to a canonical embedding yields far more stable and transferable part semantics. Experimental results show that \methodName{} establishes a new state of the art in open-world promptable 3D segmentation.
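The abstract's core idea, collapsing pose variation by mapping a point cloud into a canonical frame before reasoning about parts, can be illustrated with a classical stand-in. The sketch below uses PCA alignment as a simple, data-driven canonicalization; the paper itself learns its canonical frame from LLM-aligned data, so this is only an illustrative assumption, not the authors' method.

```python
import numpy as np

def pca_canonicalize(points):
    """Rotate a point cloud into a data-driven canonical frame.

    A classical stand-in for a learned canonical frame: center the
    cloud and align its principal axes with the coordinate axes, so
    pose variation in the input (sensor) frame collapses.
    """
    centered = points - points.mean(axis=0)
    # Eigenvectors of the covariance matrix give the principal axes.
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Order axes by decreasing variance; columns of R form the frame.
    order = np.argsort(eigvals)[::-1]
    R = eigvecs[:, order]
    # Resolve the per-axis sign ambiguity deterministically by making
    # each column's largest-magnitude entry positive.
    R *= np.sign(R[np.abs(R).argmax(axis=0), range(3)])
    return centered @ R

# Any rotation of the same shape maps to the same canonical cloud
# (up to residual per-axis sign flips).
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.3])
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
canon_a = pca_canonicalize(cloud)
canon_b = pca_canonicalize(cloud @ Rz.T)
```

With the object in a canonical frame, coarse part hypotheses like "legs support from below" reduce to simple coordinate tests (e.g. thresholding the vertical axis), which is what makes the canonical embedding attractive for transferable part semantics.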

Top-level tags: computer vision, natural language processing, multi-modal
Detailed tags: 3d segmentation, canonical representation, llm-guided alignment, open-world perception, semantic part segmentation

CoSMo3D: Open-World Promptable 3D Semantic Part Segmentation through LLM-Guided Canonical Spatial Modeling


1️⃣ One-Sentence Summary

This paper proposes CoSMo3D, a method that teaches a model a "canonical viewpoint" for understanding the functional roles of 3D object parts (e.g., wings on the sides, legs underneath), substantially improving the accuracy and stability of segmenting arbitrary 3D object parts from natural-language prompts.

Source: arXiv 2603.01205