arXiv submission date: 2026-03-02
📄 Abstract - Scaling Tasks, Not Samples: Mastering Humanoid Control through Multi-Task Model-Based Reinforcement Learning

Developing generalist robots capable of mastering diverse skills remains a central challenge in embodied AI. While recent progress emphasizes scaling model parameters and offline datasets, such approaches are limited in robotics, where learning requires active interaction. We argue that effective online learning should scale the *number of tasks*, rather than the number of samples per task. This regime reveals a structural advantage of model-based reinforcement learning (MBRL). Because physical dynamics are invariant across tasks, a shared world model can aggregate multi-task experience to learn robust, task-agnostic representations. In contrast, model-free methods suffer from gradient interference when tasks demand conflicting actions in similar states. Task diversity therefore acts as a regularizer for MBRL, improving dynamics learning and sample efficiency. We instantiate this idea with **EfficientZero-Multitask (EZ-M)**, a sample-efficient multi-task MBRL algorithm for online learning. Evaluated on **HumanoidBench**, a challenging whole-body control benchmark, EZ-M achieves state-of-the-art performance with significantly higher sample efficiency than strong baselines, without extreme parameter scaling. These results establish task scaling as a critical axis for scalable robotic learning. The project website is available here: this https URL.
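The abstract's core claim — that a shared world model can pool experience across tasks because physical dynamics are task-invariant, while only rewards and policies differ — can be illustrated with a minimal toy sketch. This is not the paper's algorithm (EZ-M is not shown here); it is a hypothetical 1-D example where two "tasks" act in conflicting ways, yet their pooled transitions fit a single dynamics model:

```python
import random

# Hedged toy sketch: the true dynamics s' = a*s + b*u are identical across
# tasks (physics is task-invariant); only the task-specific policy differs.
# We fit the shared parameters (a, b) by plain SGD on transitions pooled
# from two conflicting tasks. All names here are illustrative.

A_TRUE, B_TRUE = 0.9, 0.1  # shared 1-D dynamics parameters

def rollout(policy, steps=200, seed=0):
    """Collect (s, u, s') transitions under a task-specific policy."""
    rng = random.Random(seed)
    s, data = 0.0, []
    for _ in range(steps):
        u = policy(s) + rng.gauss(0, 0.1)   # tasks choose different actions...
        s_next = A_TRUE * s + B_TRUE * u    # ...but the dynamics are shared
        data.append((s, u, s_next))
        s = s_next
    return data

# Two tasks with conflicting behavior (e.g. "push right" vs "push left"):
# a model-free policy would receive conflicting gradients in similar states,
# but the pooled data is perfectly consistent for the dynamics model.
data = rollout(lambda s: 1.0, seed=1) + rollout(lambda s: -1.0, seed=2)

# Fit the shared model s' ~ a*s + b*u by SGD on the pooled transitions.
a, b, lr = 0.0, 0.0, 0.05
for _ in range(50):
    for s, u, s_next in data:
        err = (a * s + b * u) - s_next
        a -= lr * err * s
        b -= lr * err * u

print(round(a, 2), round(b, 2))  # recovers roughly (0.9, 0.1)
```

In this toy setting the conflicting policies actually help: they cover more of the state-action space, so the pooled fit identifies the shared dynamics — a loose analogue of the paper's "task diversity as a regularizer" argument.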

Top-level tags: robotics reinforcement learning model training
Detailed tags: model-based rl multi-task learning sample efficiency humanoid control online learning

Scaling Tasks, Not Samples: Mastering Humanoid Control through Multi-Task Model-Based Reinforcement Learning


1️⃣ One-sentence summary

This paper proposes a new approach to robot learning: instead of collecting massive amounts of data on a single task, the robot should learn many tasks simultaneously. Building on this idea, the authors develop a sample-efficient online learning algorithm that achieves strong performance on challenging humanoid control tasks while using far less training data.

Source: arXiv 2603.01452