Libra-VLA: Achieving Learning Equilibrium via an Asynchronous Coarse-to-Fine Dual-System
1️⃣ One-sentence summary
To bridge the gap between high-level semantic instructions and low-level continuous actions in robotic manipulation, this paper proposes Libra-VLA, which decomposes complex actions into two sub-systems (macro-directional decision-making and micro-level fine adjustment) and significantly improves the robot's open-world manipulation performance through asynchronous execution and balanced training difficulty.
Vision-Language-Action (VLA) models are a promising paradigm for generalist robotic manipulation, grounding high-level semantic instructions into executable physical actions. However, prevailing approaches typically adopt a monolithic generation paradigm, directly mapping visual-linguistic features to high-frequency motor commands in a flat, non-hierarchical fashion. This strategy overlooks the inherent hierarchy of robotic manipulation, in which complex actions can be naturally modeled in a Hybrid Action Space that decomposes into discrete macro-directional reaching and continuous micro-pose alignment; as a result, it severely widens the semantic-actuation gap and imposes a heavy representational burden on grounding high-level semantics to continuous actions. To address this, we introduce Libra-VLA, a novel Coarse-to-Fine Dual-System VLA architecture. We explicitly decouple the learning complexity into a coarse-to-fine hierarchy to strike a training equilibrium, while simultaneously leveraging this structural modularity to implement an asynchronous execution strategy. The Semantic Planner predicts discrete action tokens capturing macro-directional intent, while the Action Refiner conditions on this coarse intent to generate high-frequency continuous actions for precise alignment. Crucially, our empirical analysis reveals that performance follows an inverted-U curve relative to action decomposition granularity, peaking exactly when the learning difficulty is balanced between the two sub-systems. With the asynchronous design, our approach offers a scalable, robust, and responsive solution for open-world manipulation.
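The asynchronous dual-system execution described above can be sketched as two loops running at different rates: a slow planner that periodically publishes a coarse intent, and a fast control loop that always refines against the most recent intent. This is a minimal illustrative sketch; the class names (`SemanticPlanner`, `ActionRefiner`), the update frequencies, and the placeholder outputs are assumptions for illustration, not the paper's actual implementation.

```python
import threading
import time

# Hypothetical stand-ins for the two sub-systems; real Libra-VLA modules
# are learned networks, not hand-coded rules.
class SemanticPlanner:
    """Slow system: maps observation + instruction to a discrete coarse intent."""
    def plan(self, obs, instruction):
        return {"direction": "forward"}  # placeholder macro-directional token

class ActionRefiner:
    """Fast system: conditions on coarse intent to emit continuous actions."""
    def refine(self, obs, intent):
        return [0.0, 0.0, 0.01]  # placeholder continuous micro-pose delta

latest_intent = {"direction": "stay"}  # shared state between the two loops
_lock = threading.Lock()

def planner_loop(stop, obs_fn, instruction, hz=2):
    """Low-frequency loop: refresh the coarse intent a few times per second."""
    planner = SemanticPlanner()
    while not stop.is_set():
        intent = planner.plan(obs_fn(), instruction)
        with _lock:
            latest_intent.update(intent)
        time.sleep(1.0 / hz)

def control_loop(n_steps, obs_fn, hz=30):
    """High-frequency loop: refine against the newest intent without blocking."""
    refiner = ActionRefiner()
    actions = []
    for _ in range(n_steps):
        with _lock:
            intent = dict(latest_intent)  # snapshot; never wait on the planner
        actions.append(refiner.refine(obs_fn(), intent))
        time.sleep(1.0 / hz)
    return actions
```

The key design point this sketch captures is that the control loop never blocks on the planner: the refiner always acts on the latest available coarse intent, which is what makes the system responsive at high control frequencies.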
Source: arXiv: 2604.24921