RoboVIP: Multi-View Video Generation with Visual Identity Prompting Augments Robot Manipulation
1️⃣ One-Sentence Summary
This paper proposes a new method called RoboVIP, which supplies exemplar images to an image generation model as visual guidance in order to generate multi-view, temporally coherent robot manipulation video data at scale, yielding consistent improvements when training robot policy models.
The diversity, quantity, and quality of manipulation data are critical for training effective robot policies. However, due to hardware and physical setup constraints, collecting large-scale real-world manipulation data remains difficult to scale across diverse environments. Recent work uses text-prompt conditioned image diffusion models to augment manipulation data by altering the backgrounds and tabletop objects in the visual observations. However, these approaches often overlook the practical need for multi-view and temporally coherent observations required by state-of-the-art policy models. Further, text prompts alone cannot reliably specify the scene setup. To provide the diffusion model with explicit visual guidance, we introduce visual identity prompting, which supplies exemplar images as conditioning inputs to guide the generation of the desired scene setup. To this end, we also build a scalable pipeline to curate a visual identity pool from large robotics datasets. Using our augmented manipulation data to train downstream vision-language-action and visuomotor policy models yields consistent performance gains in both simulation and real-robot settings.
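The core idea in the abstract is that text prompts alone underspecify the scene, so exemplar images are supplied as additional conditioning. As a toy illustration only (the paper's actual encoders and diffusion architecture are not described in this summary, and every name below is hypothetical), the conditioning assembly might look like this: each exemplar image is encoded into a feature vector and concatenated with the text-prompt embedding before being handed to the generator.

```python
# Toy sketch of "visual identity prompting": alongside a text-prompt
# embedding, exemplar images of the desired background / tabletop objects
# are encoded and appended as extra conditioning. All functions here are
# hypothetical stand-ins, not the paper's implementation.

def encode_image(pixels):
    """Stand-in image encoder: average rows of a toy 2-D image
    into a single 1-D feature vector."""
    n_rows = len(pixels)
    n_cols = len(pixels[0])
    return [sum(row[i] for row in pixels) / n_rows for i in range(n_cols)]

def build_conditioning(text_embedding, exemplar_images):
    """Concatenate the text embedding with one embedding per exemplar
    image, so the generator receives explicit visual guidance about
    the desired scene setup in addition to the text prompt."""
    cond = list(text_embedding)
    for img in exemplar_images:
        cond.extend(encode_image(img))
    return cond

# Usage: one text embedding plus two toy 2x2 exemplar "images".
text_emb = [0.1, 0.2]
exemplars = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[0.0, 0.0], [2.0, 2.0]],
]
cond = build_conditioning(text_emb, exemplars)
print(len(cond))  # 2 text dims + 2 dims per exemplar image = 6
```

In a real diffusion model the exemplar features would typically enter through a conditioning mechanism such as cross-attention rather than flat concatenation; this sketch only shows the data flow implied by the abstract.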
Source: arXiv: 2601.05241