📄 Abstract - GR-RL: Going Dexterous and Precise for Long-Horizon Robotic Manipulation

We present GR-RL, a robotic learning framework that turns a generalist vision-language-action (VLA) policy into a highly capable specialist for long-horizon dexterous manipulation. Assuming the optimality of human demonstrations is core to existing VLA policies. However, we claim that in highly dexterous and precise manipulation tasks, human demonstrations are noisy and suboptimal. GR-RL proposes a multi-stage training pipeline that filters, augments, and reinforces the demonstrations with reinforcement learning. First, GR-RL learns a vision-language-conditioned task progress function, filters the demonstration trajectories, and keeps only the transitions that contribute positively to progress. Specifically, we show that by directly applying offline RL with sparse reward, the resulting $Q$-values can be treated as a robust progress function. Next, we introduce morphological symmetry augmentation, which greatly improves the generalization and performance of GR-RL. Lastly, to better align the VLA policy with its deployment-time behavior for high-precision control, we perform online RL by learning a latent-space noise predictor. With this pipeline, GR-RL is, to our knowledge, the first learning-based policy that can autonomously lace up a shoe by threading shoelaces through multiple eyelets, with an 83.3% success rate, a task requiring long-horizon reasoning, millimeter-level precision, and compliant soft-body interaction. We hope GR-RL provides a step toward enabling generalist robot foundation models to specialize into reliable real-world experts.
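As a rough illustration of the filtering stage described above, the sketch below treats an offline-RL critic's $Q$-values as a task-progress estimate and keeps only transitions that move it forward. This is a minimal, hypothetical reading of the abstract, not the paper's implementation: the names (`filter_transitions`, `q_fn`, `margin`), the transition format, and the simplification of querying the critic with the same action before and after the step are all assumptions.

```python
import numpy as np

def filter_transitions(trajectory, q_fn, margin=0.0):
    """Keep only transitions whose Q-value (treated as a task-progress
    estimate) increases by more than `margin` across the transition.

    trajectory: list of (obs, action, next_obs) tuples
    q_fn: callable mapping (obs, action) -> scalar Q-value from an
          offline-RL critic trained with sparse task reward
    """
    kept = []
    for obs, action, next_obs in trajectory:
        # Progress is approximated by the critic's value before and
        # after the transition; reusing the same action for both
        # queries is a simplifying assumption of this sketch.
        progress_before = q_fn(obs, action)
        progress_after = q_fn(next_obs, action)
        if progress_after - progress_before > margin:
            kept.append((obs, action, next_obs))
    return kept

# Toy usage with a stand-in critic: "progress" is just the first
# observation coordinate, so noisy backward steps get filtered out.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    traj, x = [], 0.0
    for _ in range(10):
        step = rng.normal(0.1, 0.2)  # noisy, sometimes regressive steps
        traj.append((np.array([x]), np.zeros(1), np.array([x + step])))
        x += step
    toy_q = lambda obs, act: float(obs[0])
    print(f"kept {len(filter_transitions(traj, toy_q))} of {len(traj)} transitions")
```

On real demonstrations the critic would be a trained network rather than a hand-written function, but the same thresholding logic would decide which transitions survive into the filtered dataset.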

Top-level tags: robotics, reinforcement learning, agents
Detailed tags: dexterous manipulation, vision-language-action, offline RL, policy specialization, long-horizon tasks

GR-RL: Going Dexterous and Precise for Long-Horizon Robotic Manipulation


1️⃣ One-Sentence Summary

This paper presents GR-RL, a robotic learning framework that uses a multi-stage training pipeline to upgrade a generalist vision-language-action policy into a specialist system capable of complex long-horizon dexterous manipulation (such as autonomously lacing a shoe); its core idea is to use reinforcement learning to filter, augment, and optimize imperfect human demonstration data.


📄 Open the original PDF