📄
Abstract - EgoMotion: Hierarchical Reasoning and Diffusion for Egocentric Vision-Language Motion Generation
Faithfully modeling human behavior in dynamic environments is a foundational challenge for embodied intelligence. While conditional motion synthesis has advanced significantly, egocentric motion generation remains largely underexplored due to the inherent complexity of first-person perception. In this work, we investigate Egocentric Vision-Language (Ego-VL) motion generation, a task that requires synthesizing 3D human motion conditioned jointly on first-person visual observations and natural language instructions. We identify a critical *reasoning-generation entanglement* challenge: optimizing semantic reasoning and kinematic modeling simultaneously introduces gradient conflicts that systematically degrade both multimodal grounding and motion quality. To address this challenge, we propose **EgoMotion**, a hierarchical generative framework. Inspired by the biological decoupling of cognitive reasoning and motor control, EgoMotion operates in two stages. In the Cognitive Reasoning stage, a vision-language model (VLM) projects multimodal inputs into a structured space of discrete motion primitives. This forces the VLM to acquire goal-consistent representations, effectively bridging the semantic gap between high-level perceptual understanding and low-level action execution. In the Motion Generation stage, these learned representations serve as expressive conditioning signals for a diffusion-based motion generator, which performs iterative denoising in a continuous latent space to synthesize physically plausible and temporally coherent trajectories. Extensive evaluations demonstrate that EgoMotion achieves state-of-the-art performance and produces motion sequences that are both semantically grounded and kinematically superior to existing approaches.
EgoMotion: Hierarchical Reasoning and Diffusion for Egocentric Vision-Language Motion Generation
1️⃣ One-Sentence Summary
This paper proposes EgoMotion, a two-stage generative framework: a vision-language model first performs cognitive reasoning to understand the first-person scene and instruction, and a diffusion model then generates coherent, physically plausible human motion. This design resolves the mutual interference between reasoning and generation and outperforms existing methods.
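The two-stage pipeline above can be sketched in miniature. This is a hedged illustration only: the paper does not specify its primitive vocabulary, conditioning scheme, or denoising schedule, so every name and function here (`PRIMITIVES`, `reason_to_primitives`, `denoise_trajectory`) is a hypothetical stand-in. Stage 1 mimics the VLM by mapping an instruction to discrete motion-primitive IDs; Stage 2 mimics diffusion by iteratively pulling a noisy latent toward a primitive-conditioned target.

```python
import math
import random

# Toy primitive vocabulary (assumption; the paper's actual primitives are not given).
PRIMITIVES = ["reach", "grasp", "walk", "turn", "sit"]

def reason_to_primitives(visual_feat, instruction):
    """Stand-in for the Cognitive Reasoning stage: select primitive IDs
    whose names appear in the instruction (a real VLM would ground both
    the visual features and the text)."""
    return [i for i, p in enumerate(PRIMITIVES) if p in instruction.lower()]

def denoise_trajectory(primitive_ids, steps=50, dim=4, seed=0):
    """Stand-in for the Motion Generation stage: iterative denoising of a
    continuous latent, conditioned on the discrete primitives."""
    rng = random.Random(seed)
    # Conditioning signal: a simple deterministic embedding of the primitive IDs.
    target = [math.sin(sum(primitive_ids) + d) for d in range(dim)]
    latent = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # start from pure noise
    for t in range(steps):
        alpha = 1.0 / (steps - t)  # correction strength grows over the schedule
        latent = [x + alpha * (g - x) for x, g in zip(latent, target)]
    return latent, target

ids = reason_to_primitives(None, "Walk to the chair and sit down")
latent, target = denoise_trajectory(ids)
print(ids)  # indices of "walk" and "sit"
```

The point of the sketch is the decoupling itself: Stage 1 only has to emit a short discrete plan, and Stage 2 only has to fit kinematics given that plan, so neither objective's gradients flow through the other.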