arXiv submission date: 2026-03-03
📄 Abstract - Retrieval-Augmented Robots via Retrieve-Reason-Act

To achieve general-purpose utility, we argue that robots must evolve from passive executors into active Information Retrieval users. In strictly zero-shot settings where no prior demonstrations exist, robots face a critical information gap (for example, the exact sequence required to assemble a complex furniture kit) that cannot be satisfied by internal parametric knowledge (common sense) or past internal memory. While recent robotic works attempt to use search before action, they primarily focus on retrieving past kinematic trajectories (analogous to searching internal memory) or text-based safety rules (searching for constraints). These approaches fail to address the core information need of active task construction: acquiring unseen procedural knowledge from external, unstructured documentation. In this paper, we define this paradigm as Retrieval-Augmented Robotics (RAR), empowering robots with the information-seeking capability that bridges the gap between visual documentation and physical actuation. We formulate task execution as an iterative Retrieve-Reason-Act loop: the robot or embodied agent actively retrieves relevant visual procedural manuals from an unstructured corpus, grounds the abstract 2D diagrams to 3D physical parts via cross-modal alignment, and synthesizes executable plans. We validate this paradigm on a challenging long-horizon assembly benchmark. Our experiments demonstrate that grounding robotic planning in retrieved visual documents significantly outperforms baselines relying on zero-shot reasoning or few-shot example retrieval. This work establishes the basis of RAR, extending the scope of Information Retrieval from answering user queries to driving embodied physical actions.
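The abstract's Retrieve-Reason-Act loop can be pictured as a simple control cycle. The sketch below is purely illustrative and not the paper's implementation: `ManualPage`, `retrieve`, `reason`, and `act` are all hypothetical stand-ins (keyword overlap in place of visual retrieval, substring matching in place of cross-modal 2D-to-3D grounding, and symbolic `attach(...)` steps in place of motor commands).

```python
# Illustrative sketch of a Retrieve-Reason-Act loop. All names here are
# assumptions for exposition, not the paper's API: real RAR would retrieve
# visual manual pages, ground diagrams to 3D parts, and emit robot actions.
from dataclasses import dataclass


@dataclass
class ManualPage:
    """One page of procedural documentation (stands in for a 2D diagram)."""
    doc_id: str
    text: str


def retrieve(corpus, query):
    """Retrieve: rank pages by naive keyword overlap; return the best match."""
    best, best_score = None, 0
    for page in corpus:
        score = sum(word in page.text for word in query.split())
        if score > best_score:
            best, best_score = page, score
    return best


def reason(page, scene_parts):
    """Reason: ground diagram mentions to parts observed in the 3D scene."""
    return [part for part in scene_parts if part in page.text]


def act(grounded_parts):
    """Act: synthesize one symbolic plan step per grounded part."""
    return [f"attach({part})" for part in grounded_parts]


def retrieve_reason_act(corpus, scene_parts, query):
    """Iterate the loop until no relevant documentation remains."""
    remaining, plan = list(corpus), []
    while remaining:
        page = retrieve(remaining, query)
        if page is None:
            break
        remaining.remove(page)  # each manual page informs one iteration
        plan.extend(act(reason(page, scene_parts)))
    return plan
```

In this toy form the loop terminates when retrieval returns nothing relevant; the paper's actual stopping criterion, retriever, and grounding module are not specified in the abstract.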

Top-level tags: robotics, agents, systems
Detailed tags: retrieval-augmented robotics, embodied agents, visual procedural retrieval, task planning, long-horizon assembly

Retrieval-Augmented Robots via Retrieve-Reason-Act


1️⃣ One-sentence summary

This paper proposes a new approach in which a robot actively retrieves visual operating manuals from external documentation and completes tasks through an iterative Retrieve-Reason-Act loop, closing the information gap robots face when executing complex tasks without prior demonstrations.

Source: arXiv:2603.02688