Squint: Fast Visual Reinforcement Learning for Sim-to-Real Robotics
1️⃣ One-sentence summary
This paper introduces Squint, a new visual reinforcement learning method that combines parallel simulation, a distributional critic, and several other optimizations to train robot visual manipulation policies in just minutes on a single GPU, and successfully transfers them from simulation to a real robot.
Visual reinforcement learning is appealing for robotics but expensive: off-policy methods are sample-efficient yet slow, while on-policy methods parallelize well but waste samples. Recent work has shown that off-policy methods can train faster than on-policy methods in wall-clock time for state-based control. Extending this to vision remains challenging, because high-dimensional input images complicate training dynamics and introduce substantial storage and encoding overhead. To address these challenges, we introduce Squint, a visual Soft Actor-Critic method that achieves faster wall-clock training than prior visual off-policy and on-policy methods. Squint achieves this via parallel simulation, a distributional critic, resolution squinting, layer normalization, a tuned update-to-data ratio, and an optimized implementation. We evaluate on the SO-101 Task Set, a new suite of eight manipulation tasks in ManiSkill3 with heavy domain randomization, and demonstrate sim-to-real transfer to a real SO-101 robot. We train policies for 15 minutes on a single RTX 3090 GPU, with most tasks converging in under 6 minutes.
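Of the components listed above, the abstract does not specify how the distributional critic is parameterized. A common choice in the literature is a categorical (C51-style) head: the critic outputs logits over a fixed set of return "atoms", and the scalar Q-value is the expectation under that distribution. The sketch below illustrates only that generic idea; the atom count, value range, and function names are assumptions, not details from the paper.

```python
import numpy as np

# Hedged sketch of a categorical (C51-style) distributional critic head.
# N_ATOMS, V_MIN, and V_MAX are illustrative assumptions, not values
# from the Squint paper.
N_ATOMS = 51
V_MIN, V_MAX = 0.0, 10.0
atoms = np.linspace(V_MIN, V_MAX, N_ATOMS)  # fixed support of returns

def q_value(logits: np.ndarray) -> float:
    """Scalar Q-value implied by the critic's atom logits:
    softmax over atoms, then expectation of the support."""
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    return float(probs @ atoms)

# Uniform logits: the implied distribution is uniform over the support,
# so the Q-value is the midpoint of [V_MIN, V_MAX].
print(q_value(np.zeros(N_ATOMS)))
```

Training such a critic then minimizes a cross-entropy loss against a projected target distribution rather than a scalar TD error, which is often credited with stabilizing value learning.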
Source: arXiv:2602.21203