
arXiv submission date: 2026-04-15
📄 Abstract - Jump-Start Reinforcement Learning with Vision-Language-Action Regularization

Reinforcement learning (RL) enables high-frequency, closed-loop control for robotic manipulation, but scaling to long-horizon tasks with sparse or imperfect rewards remains difficult due to inefficient exploration and poor credit assignment. Vision-Language-Action (VLA) models leverage large-scale multimodal pretraining to provide generalist, task-level reasoning, but current limitations hinder their direct use in fast and precise manipulation. In this paper, we propose Vision-Language-Action Jump-Starting (VLAJS), a method that bridges sparse VLA guidance with on-policy RL to improve exploration and learning efficiency. VLAJS treats VLAs as transient sources of high-level action suggestions that bias early exploration and improve credit assignment, while preserving the high-frequency, state-based control of RL. Our approach augments Proximal Policy Optimization (PPO) with a directional action-consistency regularization that softly aligns the RL agent's actions with VLA guidance during early training, without enforcing strict imitation, requiring demonstrations, or relying on continuous teacher queries. VLA guidance is applied sparsely and annealed over time, allowing the agent to adapt online and ultimately surpass the guiding policy. We evaluate VLAJS on six challenging manipulation tasks: lifting, pick-and-place, peg reorientation, peg insertion, poking, and pushing in simulation, and validate a subset on a real Franka Panda robot. VLAJS consistently outperforms PPO and distillation-style baselines in sample efficiency, reducing required environment interactions by over 50% in several tasks. Real-world experiments demonstrate zero-shot sim-to-real transfer and robust execution under clutter, object variation, and external perturbations.
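The core mechanism described above — PPO augmented with a directional action-consistency term that is applied sparsely and annealed over training — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name `vlajs_regularized_loss`, the linear annealing schedule, the cosine-based directional penalty, and the parameters `beta0` and `anneal_steps` are all assumptions for the sake of the example.

```python
import numpy as np

def vlajs_regularized_loss(ppo_loss, agent_action, vla_action,
                           step, anneal_steps=100_000, beta0=0.1):
    """Hypothetical sketch of a VLAJS-style regularized objective.

    Adds an annealed penalty that softly aligns the direction of the
    RL agent's action with the VLA suggestion, without enforcing
    strict imitation of the teacher.
    """
    # Linear annealing: the guidance weight decays to zero, so the
    # agent can eventually deviate from and surpass the VLA policy.
    beta = beta0 * max(0.0, 1.0 - step / anneal_steps)

    # Directional (cosine) consistency: penalize only misalignment in
    # direction, leaving magnitude free for high-frequency control.
    a = np.asarray(agent_action, dtype=float)
    g = np.asarray(vla_action, dtype=float)
    cos = a @ g / (np.linalg.norm(a) * np.linalg.norm(g) + 1e-8)

    return ppo_loss + beta * (1.0 - cos)
```

Because the penalty vanishes once `step` reaches `anneal_steps`, late training reduces to plain PPO, matching the abstract's claim that the VLA acts only as a transient source of guidance.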

Top-level tags: robotics, reinforcement learning, multi-modal
Detailed tags: vision-language-action, jump-start rl, action regularization, manipulation, sample efficiency

Jump-Start Reinforcement Learning with Vision-Language-Action Regularization


1️⃣ One-sentence summary

This paper proposes a new method that combines a generalist vision-language model, which understands task goals but acts slowly, with reinforcement learning algorithms that excel at fast, precise control, enabling robots to learn complex manipulation tasks faster and improving learning efficiency by over 50%.

Source: arXiv 2604.13733