Olaf-World: Orienting Latent Actions for Video World Modeling
1️⃣ One-Sentence Summary
This paper proposes a new method, Olaf-World, which uses a novel sequence-level alignment objective to learn latent action representations with general semantics that transfer across scenes from large amounts of unlabeled video, significantly improving the action controllability and data efficiency of video world models.
Scaling action-controllable world models is limited by the scarcity of action labels. While latent action learning promises to extract control interfaces from unlabeled video, learned latents often fail to transfer across contexts: they entangle scene-specific cues and lack a shared coordinate system. This occurs because standard objectives operate only within each clip, providing no mechanism to align action semantics across contexts. Our key insight is that although actions are unobserved, their semantic effects are observable and can serve as a shared reference. We introduce Seq$\Delta$-REPA, a sequence-level control-effect alignment objective that anchors the integrated latent action to temporal feature differences from a frozen, self-supervised video encoder. Building on this, we present Olaf-World, a pipeline that pretrains action-conditioned video world models from large-scale passive video. Extensive experiments demonstrate that our method learns a more structured latent action space, leading to stronger zero-shot action transfer and more data-efficient adaptation to new control interfaces than state-of-the-art baselines.
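To make the sequence-level control-effect alignment idea concrete, below is a minimal, hypothetical sketch of what such a loss could look like: per-step latent actions are integrated over a clip, projected, and aligned with the temporal feature difference produced by a frozen self-supervised encoder. All names here (`seq_delta_repa_loss`, `frozen_encoder`, `proj`) and the choice of cosine alignment are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a Seq-Delta-REPA-style alignment loss (PyTorch).
# Function/argument names and the cosine objective are assumptions
# for illustration only, not the paper's API.
import torch
import torch.nn.functional as F


def seq_delta_repa_loss(latent_actions, frames, frozen_encoder, proj):
    """Align integrated latent actions with observable control effects.

    latent_actions: (B, T-1, D_a) latent actions between consecutive frames
    frames:         (B, T, C, H, W) video clip
    frozen_encoder: frozen self-supervised encoder, frame -> (B, D_f)
    proj:           small trainable head mapping D_a -> D_f
    """
    # Integrate (sum) per-step latent actions: a sequence-level action
    # whose effect spans the whole clip.
    integrated = latent_actions.sum(dim=1)                    # (B, D_a)

    # Observable control effect: temporal feature difference between the
    # last and first frames under the frozen encoder (no gradients).
    with torch.no_grad():
        feat_first = frozen_encoder(frames[:, 0])             # (B, D_f)
        feat_last = frozen_encoder(frames[:, -1])             # (B, D_f)
        delta = feat_last - feat_first                        # (B, D_f)

    # Cosine alignment between the projected integrated action and the
    # feature-space effect; 1 - cos_sim so lower is better.
    pred = proj(integrated)                                   # (B, D_f)
    return 1.0 - F.cosine_similarity(pred, delta, dim=-1).mean()
```

In this sketch the frozen encoder supplies the shared reference frame: because its features are context-agnostic, aligning integrated latent actions to feature differences gives actions from different scenes a common coordinate system.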
Source: arXiv: 2602.10104