arXiv submission date: 2025-12-15
📄 Abstract - RoboTracer: Mastering Spatial Trace with Reasoning in Vision-Language Models for Robotics

Spatial tracing, as a fundamental embodied interaction ability for robots, is inherently challenging as it requires multi-step metric-grounded reasoning compounded with complex spatial referring and real-world metric measurement. However, existing methods struggle with this compositional task. To this end, we propose RoboTracer, a 3D-aware VLM that first achieves both 3D spatial referring and measuring via a universal spatial encoder and a regression-supervised decoder to enhance scale awareness during supervised fine-tuning (SFT). Moreover, RoboTracer advances multi-step metric-grounded reasoning via reinforcement fine-tuning (RFT) with metric-sensitive process rewards, supervising key intermediate perceptual cues to accurately generate spatial traces. To support SFT and RFT training, we introduce TraceSpatial, a large-scale dataset of 30M QA pairs, spanning outdoor/indoor/tabletop scenes and supporting complex reasoning processes (up to 9 steps). We further present TraceSpatial-Bench, a challenging benchmark that fills the gap in evaluating spatial tracing. Experimental results show that RoboTracer surpasses baselines in spatial understanding, measuring, and referring, with an average success rate of 79.1%, and also achieves SOTA performance on TraceSpatial-Bench by a large margin, exceeding Gemini-2.5-Pro by 36% in accuracy. Notably, RoboTracer can be integrated with various control policies to execute long-horizon, dynamic tasks across diverse robots (UR5, G1 humanoid) in cluttered real-world scenes.
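
The abstract describes the RFT stage only at a high level. As a rough illustration, the sketch below shows one way a metric-sensitive process reward could be structured in Python, assuming the model emits intermediate metric estimates (e.g., object distances in meters) and a final spatial trace of 3D waypoints. All names, tolerances, and weights here (step_reward, trace_reward, process_reward, tol, sigma, w_step, w_trace) are assumptions for illustration, not the paper's actual reward design.

```python
# Hypothetical sketch of a metric-sensitive process reward for RFT.
# Names, tolerances, and weights are illustrative assumptions only.
from typing import List, Tuple
import math

Point3D = Tuple[float, float, float]


def step_reward(pred_value: float, gt_value: float, tol: float = 0.10) -> float:
    """Reward one intermediate metric estimate (e.g., a distance in meters).

    Returns 1.0 when the relative error is within `tol`, decaying linearly
    to 0.0 as the relative error grows to 2 * tol.
    """
    rel_err = abs(pred_value - gt_value) / max(abs(gt_value), 1e-6)
    return max(0.0, min(1.0, (2 * tol - rel_err) / tol))


def trace_reward(pred_trace: List[Point3D],
                 gt_trace: List[Point3D],
                 sigma: float = 0.05) -> float:
    """Reward the final spatial trace via a Gaussian kernel over per-waypoint
    Euclidean error (meters), assuming equal-length traces for simplicity."""
    if not gt_trace or len(pred_trace) != len(gt_trace):
        return 0.0
    errs = [math.dist(p, g) for p, g in zip(pred_trace, gt_trace)]
    return sum(math.exp(-(e / sigma) ** 2) for e in errs) / len(errs)


def process_reward(pred_steps: List[float], gt_steps: List[float],
                   pred_trace: List[Point3D], gt_trace: List[Point3D],
                   w_step: float = 0.5, w_trace: float = 0.5) -> float:
    """Combine rewards on intermediate perceptual cues and the final trace.

    Missing intermediate predictions contribute zero (zip truncates).
    """
    if gt_steps:
        step_term = sum(step_reward(p, g)
                        for p, g in zip(pred_steps, gt_steps)) / len(gt_steps)
    else:
        step_term = 0.0
    return w_step * step_term + w_trace * trace_reward(pred_trace, gt_trace)
```

In an actual RFT loop, this scalar would serve as the reward for each sampled reasoning trace in a policy-optimization update; that training machinery is omitted here.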

Top-level tags: robotics, multi-modal, model training
Detailed tags: spatial reasoning, vision-language models, reinforcement fine-tuning, embodied AI, 3D perception

RoboTracer: Mastering Spatial Trace with Reasoning in Vision-Language Models for Robotics


1️⃣ One-Sentence Summary

This paper proposes RoboTracer, a new vision-language model whose training approach equips robots with multi-step spatial reasoning and precise metric measurement in complex real-world scenes, enabling them to plan and execute long-horizon, dynamic tasks.


Source: arXiv:2512.13660