arXiv submission date: 2026-02-16
📄 Abstract - MoRL: Reinforced Reasoning for Unified Motion Understanding and Generation

Human motion understanding and generation are crucial for vision and robotics but remain limited in reasoning capability and test-time planning. We propose MoRL, a unified multimodal motion model trained with supervised fine-tuning and reinforcement learning with verifiable rewards. Our task-specific reward design combines semantic alignment and reasoning coherence for understanding with physical plausibility and text-motion consistency for generation, improving both logical reasoning and perceptual realism. To further enhance inference, we introduce Chain-of-Motion (CoM), a test-time reasoning method that enables step-by-step planning and reflection. We also construct two large-scale CoT datasets, MoUnd-CoT-140K and MoGen-CoT-140K, to align motion sequences with reasoning traces and action descriptions. Experiments on HumanML3D and KIT-ML show that MoRL achieves significant gains over state-of-the-art baselines. Code: this https URL. Website: this https URL.
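The abstract describes a task-specific reward that combines semantic alignment and reasoning coherence for understanding, and physical plausibility and text-motion consistency for generation. A minimal sketch of such a combination is below; the function names, weights, and linear weighting scheme are illustrative assumptions, not the paper's actual reward implementation.

```python
def understanding_reward(semantic_alignment: float,
                         reasoning_coherence: float,
                         w_sem: float = 0.5,
                         w_reason: float = 0.5) -> float:
    """Hypothetical verifiable reward for the understanding task.

    Inputs are assumed to be scores in [0, 1] produced by separate
    verifiers; the equal weights are an illustrative choice.
    """
    return w_sem * semantic_alignment + w_reason * reasoning_coherence


def generation_reward(physical_plausibility: float,
                      text_motion_consistency: float,
                      w_phys: float = 0.5,
                      w_cons: float = 0.5) -> float:
    """Hypothetical verifiable reward for the generation task."""
    return w_phys * physical_plausibility + w_cons * text_motion_consistency


if __name__ == "__main__":
    # A response with strong alignment but weak reasoning scores mid-range.
    print(understanding_reward(0.9, 0.3))   # 0.6 with equal weights
    print(generation_reward(0.8, 0.8))      # 0.8 with equal weights
```

In RL with verifiable rewards, each component would typically be computed by an automatic checker rather than a learned reward model; how MoRL weights or gates these components is not specified in the abstract.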

Top-level tags: multi-modal agents, reinforcement learning
Detailed tags: motion understanding, motion generation, reasoning, chain-of-motion, human motion

MoRL: Reinforced Reasoning for Unified Motion Understanding and Generation


1️⃣ One-sentence summary

This paper proposes MoRL, a unified model that combines supervised fine-tuning and reinforcement learning to both understand and generate human motion. It also introduces a test-time reasoning method called Chain-of-Motion (CoM), which lets the model plan and reflect on actions step by step, much like a person would, yielding better results in both logical reasoning and motion realism.

Source: arXiv:2602.14534