arXiv submission date: 2025-12-18
📄 Abstract - Animate Any Character in Any World

Recent advances in world models have greatly enhanced interactive environment simulation. Existing methods mainly fall into two categories: (1) static world generation models, which construct 3D environments without active agents, and (2) controllable-entity models, which allow a single entity to perform limited actions in an otherwise uncontrollable environment. In this work, we introduce AniX, leveraging the realism and structural grounding of static world generation while extending controllable-entity models to support user-specified characters capable of performing open-ended actions. Users can provide a 3DGS scene and a character, then direct the character through natural language to perform diverse behaviors from basic locomotion to object-centric interactions while freely exploring the environment. AniX synthesizes temporally coherent video clips that preserve visual fidelity with the provided scene and character, formulated as a conditional autoregressive video generation problem. Built upon a pre-trained video generator, our training strategy significantly enhances motion dynamics while maintaining generalization across actions and characters. Our evaluation covers a broad range of aspects, including visual quality, character consistency, action controllability, and long-horizon coherence.
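The abstract frames the task as conditional autoregressive video generation: each clip is conditioned on the 3DGS scene, the character, a language instruction, and the tail of the previous clip. The sketch below illustrates that rollout structure only; the function names, array shapes, and context-window size are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def generate_clip(scene, character, instruction, prev_frames, num_frames=16, rng=None):
    """Stand-in for the video generator: in AniX-style generation this would
    condition on the 3DGS scene render, the character reference, the language
    instruction, and the previous clip's last frames. Here it just emits
    placeholder frames of the scene's resolution (illustrative assumption)."""
    rng = rng or np.random.default_rng(0)
    h, w, c = scene.shape
    return rng.random((num_frames, h, w, c)).astype(np.float32)

def rollout(scene, character, instructions, context_frames=4):
    """Autoregressive rollout: each new clip sees the last `context_frames`
    frames of the previous clip, which is what gives long-horizon coherence."""
    clips, prev = [], None
    for instr in instructions:
        clip = generate_clip(scene, character, instr, prev)
        clips.append(clip)
        prev = clip[-context_frames:]  # carry temporal context forward
    return np.concatenate(clips, axis=0)

scene = np.zeros((64, 64, 3), dtype=np.float32)      # placeholder 3DGS render
character = np.zeros((32, 32, 3), dtype=np.float32)  # placeholder character ref
video = rollout(scene, character, ["walk forward", "pick up the cup"])
print(video.shape)  # (32, 64, 64, 3): 2 clips of 16 frames each
```

The key design point mirrored here is that conditioning on a short frame window, rather than the whole history, keeps per-step cost constant while still chaining clips into one coherent long-horizon video.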

Top-level tags: computer vision, video generation, multi-modal
Detailed tags: character animation, 3DGS scene, conditional autoregressive generation, world models, interactive simulation

Animate Any Character in Any World


1️⃣ One-sentence summary

This paper presents AniX, a system that lets users direct a specified character in a given 3D scene through natural-language instructions, performing diverse open-ended actions while generating high-quality, coherent video.

Source: arXiv 2512.17796