SurgWorld: Learning Surgical Robot Policies from Videos via World Modeling
1️⃣ One-Sentence Summary
This paper proposes SurgWorld, a method that builds a world model capable of generating realistic surgical videos and infers robot action data from them, so that large amounts of unlabeled surgical video can be used to train surgical robots. This effectively addresses the scarcity of real action data and significantly improves the robot's manipulation performance.
Data scarcity remains a fundamental barrier to achieving fully autonomous surgical robots. While large-scale vision-language-action (VLA) models have shown impressive generalization in household and industrial manipulation by leveraging paired video-action data from diverse domains, surgical robotics suffers from a paucity of datasets that include both visual observations and accurate robot kinematics. In contrast, vast corpora of surgical videos exist, but they lack corresponding action labels, preventing direct application of imitation learning or VLA training. In this work, we aim to alleviate this problem by learning policy models from SurgWorld, a world model designed for surgical physical AI. We curated the Surgical Action Text Alignment (SATA) dataset with detailed action descriptions tailored to surgical robots. We then built SurgWorld on a state-of-the-art physical AI world model and SATA; it generates diverse, generalizable, and realistic surgery videos. We are also the first to use an inverse dynamics model to infer pseudo-kinematics from synthetic surgical videos, producing synthetic paired video-action data. We demonstrate that a surgical VLA policy trained with these augmented data significantly outperforms models trained only on real demonstrations on a real surgical robot platform. Our approach offers a scalable path toward autonomous surgical skill acquisition by leveraging abundant unlabeled surgical video and generative world modeling, opening the door to generalizable and data-efficient surgical robot policies.
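The pseudo-labeling step described in the abstract — running an inverse dynamics model (IDM) over consecutive frames of a generated video to recover the action between them — can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the class and function names (`InverseDynamicsModel`, `label_video`), the tiny conv encoder, and the 7-DoF action dimension are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Predict the action taken between two consecutive frames.

    Hypothetical architecture: a small conv encoder over the
    channel-stacked frame pair, followed by a linear action head.
    """
    def __init__(self, action_dim: int = 7):  # 7-DoF is an assumption
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, action_dim)

    def forward(self, frame_t: torch.Tensor, frame_t1: torch.Tensor) -> torch.Tensor:
        # Stack the frame pair along the channel axis: (B, 3+3, H, W).
        x = torch.cat([frame_t, frame_t1], dim=1)
        return self.head(self.encoder(x))

def label_video(idm: InverseDynamicsModel, frames: torch.Tensor):
    """Turn an unlabeled clip (T, 3, H, W) into (frame, pseudo-action) pairs."""
    with torch.no_grad():
        actions = idm(frames[:-1], frames[1:])  # (T-1, action_dim)
    return list(zip(frames[:-1], actions))

# E.g. an 8-frame clip sampled from the generative world model.
frames = torch.rand(8, 3, 64, 64)
pairs = label_video(InverseDynamicsModel(), frames)
print(len(pairs))  # T-1 pseudo-labeled transitions
```

In this scheme, the IDM would be trained on the small real video-action corpus and then applied to the world model's synthetic videos, yielding synthetic paired data on which the VLA policy can be trained alongside real demonstrations.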
Source: arXiv:2512.23162