XEmbodied: A Foundation Model with Enhanced Geometric and Physical Cues for Large-Scale Embodied Environments
1️⃣ One-Sentence Summary
This paper proposes a new foundation model, XEmbodied, which fuses 3D geometric information with physical cues to significantly improve an agent's spatial understanding, interaction, and generalization in large-scale real-world environments.
Vision-Language-Action (VLA) models drive next-generation autonomous systems, but training them requires scalable, high-quality annotations from complex environments. Current cloud pipelines rely on generic vision-language models (VLMs) that, due to their 2D image-text pretraining, lack geometric reasoning and domain semantics. To address this mismatch, we propose XEmbodied, a cloud-side foundation model that endows VLMs with intrinsic awareness of 3D geometry and physical interaction cues (e.g., occupancy grids, 3D boxes). Instead of treating geometry as auxiliary input, XEmbodied integrates geometric representations via a structured 3D Adapter and distills physical signals into context tokens using an Efficient Image-Embodied Adapter. Through a progressive domain curriculum and reinforcement learning post-training, XEmbodied preserves general capabilities while demonstrating robust performance across 18 public benchmarks. It significantly improves spatial reasoning, traffic semantics, embodied affordance, and out-of-distribution generalization for large-scale scenario mining and embodied VQA.
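To make the two adapter components named in the abstract more concrete, here is a minimal, hypothetical PyTorch sketch. The paper does not publish implementation details, so every class name, dimension, and the use of learned-query cross-attention below are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch (all names and shapes are assumptions, not from the paper):
# (1) project 3D geometric features into the VLM's token space, and
# (2) distill physical cues into a small set of context tokens.
import torch
import torch.nn as nn

class Structured3DAdapter(nn.Module):
    """Projects voxelized geometry (e.g., occupancy-grid features) into LM tokens."""
    def __init__(self, voxel_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(voxel_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, voxel_feats: torch.Tensor) -> torch.Tensor:
        # voxel_feats: (B, N_voxels, voxel_dim) -> (B, N_voxels, hidden_dim)
        return self.proj(voxel_feats)

class PhysicalCueDistiller(nn.Module):
    """Compresses physical signals (e.g., 3D boxes) into K context tokens
    via cross-attention from learned queries."""
    def __init__(self, cue_dim: int, hidden_dim: int, num_tokens: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_tokens, hidden_dim))
        self.cue_proj = nn.Linear(cue_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)

    def forward(self, cues: torch.Tensor) -> torch.Tensor:
        # cues: (B, N_cues, cue_dim) -> (B, num_tokens, hidden_dim)
        kv = self.cue_proj(cues)
        q = self.queries.unsqueeze(0).expand(cues.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)
        return out

# Usage: concatenate adapted geometry tokens and distilled cue tokens, then
# prepend them to the usual image/text tokens fed to the VLM backbone.
geo = Structured3DAdapter(voxel_dim=64, hidden_dim=1024)(torch.randn(2, 200, 64))
phys = PhysicalCueDistiller(cue_dim=16, hidden_dim=1024)(torch.randn(2, 30, 16))
vlm_extra_tokens = torch.cat([geo, phys], dim=1)  # (2, 208, 1024)
```

The distiller keeps the token budget fixed (here 8 tokens) regardless of how many physical cues a scene contains, which is one plausible way to inject geometry without inflating the VLM's context length.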
Source: arXiv: 2604.18484