LOME: Learning Human-Object Manipulation with Action-Conditioned Egocentric World Model
1️⃣ One-Sentence Summary
This paper proposes LOME, an egocentric world model that generates realistic human-object interaction videos from a single image, a text prompt, and per-frame human actions (including body poses and hand gestures), surpassing existing methods in action-control precision, generalization to new scenes, and the physical realism of the interactions.
Learning human-object manipulation presents significant challenges due to the fine-grained and contact-rich nature of the motions involved. Traditional physics-based animation requires extensive modeling and manual setup, and more importantly, it neither generalizes well across diverse object morphologies nor scales effectively to real-world environments. To address these limitations, we introduce LOME, an egocentric world model that generates realistic human-object interactions as videos conditioned on an input image, a text prompt, and per-frame human actions, including both body poses and hand gestures. LOME injects strong and precise action guidance into object manipulation by jointly estimating spatial human actions and the environment context during training. After finetuning a pretrained video generative model on videos of diverse egocentric human-object interactions, LOME demonstrates not only high action-following accuracy and strong generalization to unseen scenarios, but also realistic physical consequences of hand-object interactions, e.g., liquid flowing from a bottle into a mug after executing a "pouring" action. Extensive experiments demonstrate that our video-based framework significantly outperforms state-of-the-art image-based and video-based action-conditioned methods, as well as Image/Text-to-Video (I/T2V) generative models, in terms of both temporal consistency and motion control. LOME paves the way for photorealistic AR/VR experiences and scalable robotic training without being limited to simulated environments or relying on explicit 3D/4D modeling.
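To make the conditioning signal described above concrete, here is a minimal, purely illustrative Python sketch of how a request to such a model might be packaged: one reference image, one text prompt, and one action (body pose plus hand gesture) per output frame. All names, field shapes, and the helper below are hypothetical assumptions for illustration, not the paper's actual interface.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: the conditioning inputs an action-conditioned
# egocentric world model like LOME consumes, per the abstract --
# a single image, a text prompt, and per-frame human actions.

@dataclass
class FrameAction:
    body_pose: List[float]      # e.g. flattened body joint parameters for one frame (assumed layout)
    hand_gesture: List[float]   # e.g. flattened hand/finger joint parameters (assumed layout)

@dataclass
class GenerationRequest:
    image_path: str             # initial egocentric view
    text_prompt: str            # natural-language description of the interaction
    actions: List[FrameAction]  # one action per generated video frame

def num_frames(req: GenerationRequest) -> int:
    # The generated video has one frame per provided action.
    return len(req.actions)

if __name__ == "__main__":
    req = GenerationRequest(
        image_path="kitchen_egocentric.jpg",
        text_prompt="pour water from the bottle into the mug",
        actions=[FrameAction(body_pose=[0.0] * 72, hand_gesture=[0.0] * 90)
                 for _ in range(48)],
    )
    print(f"Requesting a {num_frames(req)}-frame interaction video.")
```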
Source: arXiv: 2603.27449