ECHO: Edge-Cloud Humanoid Orchestration for Language-to-Motion Control
1️⃣ One-Sentence Summary
This paper presents a system called ECHO, which uses a cloud-hosted AI model to generate motions from text instructions and executes them stably through a controller deployed on the robot itself, enabling a humanoid robot to directly understand and carry out complex motion commands.
We present ECHO, an edge-cloud framework for language-driven whole-body control of humanoid robots. A cloud-hosted diffusion-based text-to-motion generator synthesizes motion references from natural language instructions, while an edge-deployed reinforcement-learning tracker executes them in closed loop on the robot. The two modules are bridged by a compact, robot-native 38-dimensional motion representation that encodes joint angles, root planar velocity, root height, and a continuous 6D root orientation per frame, eliminating inference-time retargeting from human body models and remaining directly compatible with low-level PD control. The generator adopts a 1D convolutional UNet with cross-attention conditioned on CLIP-encoded text features; at inference, DDIM sampling with 10 denoising steps and classifier-free guidance produces motion sequences in approximately one second on a cloud GPU. The tracker follows a Teacher-Student paradigm: a privileged teacher policy is distilled into a lightweight student equipped with an evidential adaptation module for sim-to-real transfer, further strengthened by morphological symmetry constraints and domain randomization. An autonomous fall recovery mechanism detects falls via onboard IMU readings and retrieves recovery trajectories from a pre-built motion library. We evaluate ECHO on a retargeted HumanML3D benchmark, where it achieves strong generation quality (FID 0.029, R-Precision Top-1 0.686) under a unified robot-domain evaluator, while maintaining high motion safety and trajectory consistency. Real-world experiments on a Unitree G1 humanoid demonstrate stable execution of diverse text commands with zero hardware fine-tuning.
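The 38-dimensional per-frame representation decomposes naturally if one assumes the Unitree G1's 29 actuated joints: 29 joint angles + 2 planar root velocity + 1 root height + 6 continuous rotation = 38. Below is a minimal sketch of packing such a frame and of the continuous 6D rotation encoding; the exact split and all helper names are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Assumed joint count for the Unitree G1; the paper only states the total
# dimensionality (38), so this split is a plausible reconstruction.
NUM_JOINTS = 29

def rot6d_from_matrix(R):
    """Encode a rotation matrix as the continuous 6D representation:
    its first two columns, flattened. Avoids Euler-angle discontinuities."""
    return np.asarray(R, dtype=np.float32)[:, :2].T.reshape(6)

def matrix_from_rot6d(d6):
    """Recover the full rotation matrix from 6D via Gram-Schmidt."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - np.dot(b1, a2) * b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=1)

def pack_frame(joint_angles, root_vel_xy, root_height, root_rot6d):
    """Concatenate the per-frame components into one 38-D vector."""
    frame = np.concatenate([
        np.asarray(joint_angles, dtype=np.float32),   # 29: joint angles (rad)
        np.asarray(root_vel_xy, dtype=np.float32),    # 2: planar root velocity
        np.asarray([root_height], dtype=np.float32),  # 1: root height
        np.asarray(root_rot6d, dtype=np.float32),     # 6: 6D root orientation
    ])
    assert frame.shape == (38,)
    return frame
```

Because every field is already expressed in the robot's own joint and root coordinates, a sequence of such frames can feed a low-level PD controller directly, with no retargeting from a human body model at inference time.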
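The 10-step DDIM sampling with classifier-free guidance described above can be sketched as a single deterministic denoising step. The interfaces here (`model` as a noise predictor, guidance scale `w`, the `alphas_cumprod` schedule) are standard diffusion conventions assumed for illustration, not the paper's actual API:

```python
import numpy as np

def cfg_ddim_step(model, x_t, t, t_prev, cond, uncond, w, alphas_cumprod):
    """One DDIM step (eta = 0) with classifier-free guidance.

    model(x, t, c) is assumed to predict the noise eps added at step t.
    """
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the text-conditional one by scale w.
    eps_u = model(x_t, t, uncond)
    eps_c = model(x_t, t, cond)
    eps = eps_u + w * (eps_c - eps_u)

    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
    # Deterministic DDIM update: estimate the clean sample, then
    # re-noise it to the previous (less noisy) timestep.
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps
```

Iterating this step over only 10 timesteps (instead of the full training schedule) is what makes the roughly one-second cloud-side generation latency plausible.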
Source: arXiv:2603.16188