RigMo: Unifying Rig and Motion Learning for Generative Animation
1️⃣ One-sentence summary
This paper proposes a unified generative framework named RigMo that jointly learns an animation's skeletal structure and motion patterns directly from raw mesh sequences, without any human annotations, enabling more scalable, interpretable, and structure-aware dynamic 3D modeling.
Despite significant progress in 4D generation, rig and motion, the core structural and dynamic components of animation, are typically modeled as separate problems. Existing pipelines rely on ground-truth skeletons and skinning weights for motion generation and treat auto-rigging as an independent process, undermining scalability and interpretability. We present RigMo, a unified generative framework that jointly learns rig and motion directly from raw mesh sequences, without any human-provided rig annotations. RigMo encodes per-vertex deformations into two compact latent spaces: a rig latent that decodes into explicit Gaussian bones and skinning weights, and a motion latent that produces time-varying SE(3) transformations. Together, these outputs define an animatable mesh with explicit structure and coherent motion, enabling feed-forward rig and motion inference for deformable objects. Beyond unified rig-motion discovery, we introduce a Motion-DiT model operating in RigMo's latent space and demonstrate that these structure-aware latents can naturally support downstream motion generation tasks. Experiments on DeformingThings4D, Objaverse-XL, and TrueBones demonstrate that RigMo learns smooth, interpretable, and physically plausible rigs, while achieving superior reconstruction and category-level generalization compared to existing auto-rigging and deformation baselines. RigMo establishes a new paradigm for unified, structure-aware, and scalable dynamic 3D modeling.
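The abstract states that skinning weights plus per-bone SE(3) transforms "define an animatable mesh." The standard mechanism behind such a formulation is linear blend skinning, where each vertex moves as a weighted sum of the bone transforms acting on it. The sketch below is not RigMo's implementation (the paper's decoders and latent spaces are not reproduced here); it is a minimal, generic NumPy illustration of how skinning weights and SE(3) transforms combine to deform a mesh, with all function and variable names chosen for this example.

```python
import numpy as np

def blend_skinning(vertices, weights, rotations, translations):
    """Linear blend skinning: each vertex is deformed by a weighted
    combination of per-bone rigid (SE(3)) transforms.

    vertices:     (V, 3) rest-pose vertex positions
    weights:      (V, B) skinning weights (rows sum to 1)
    rotations:    (B, 3, 3) per-bone rotation matrices
    translations: (B, 3) per-bone translations
    returns:      (V, 3) deformed vertex positions
    """
    # Apply every bone transform to every vertex -> (B, V, 3)
    transformed = np.einsum("bij,vj->bvi", rotations, vertices)
    transformed += translations[:, None, :]
    # Blend the per-bone results with the skinning weights -> (V, 3)
    return np.einsum("vb,bvi->vi", weights, transformed)

# Toy example: two bones, two vertices, each vertex bound to one bone.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.0, 1.0]])
R = np.stack([np.eye(3), np.eye(3)])          # identity rotations
t = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # bone 1 lifts by +z
deformed = blend_skinning(verts, w, R, t)
# vertex 0 stays put; vertex 1 follows bone 1 -> [1, 0, 1]
```

In a time-varying setting like the one the paper describes, `rotations` and `translations` would be re-evaluated per frame from the motion latent while the weights stay fixed, which is why the rig and motion components can be factored into separate latent spaces.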
Source: arXiv: 2601.06378