arXiv submission date: 2025-12-11
📄 Abstract - MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos

Motion capture now underpins content creation far beyond digital humans, yet most existing pipelines remain species- or template-specific. We formalize this gap as Category-Agnostic Motion Capture (CAMoCap): given a monocular video and an arbitrary rigged 3D asset as a prompt, the goal is to reconstruct a rotation-based animation such as BVH that directly drives the specific asset. We present MoCapAnything, a reference-guided, factorized framework that first predicts 3D joint trajectories and then recovers asset-specific rotations via constraint-aware inverse kinematics. The system contains three learnable modules and a lightweight IK stage: (1) a Reference Prompt Encoder that extracts per-joint queries from the asset's skeleton, mesh, and rendered images; (2) a Video Feature Extractor that computes dense visual descriptors and reconstructs a coarse 4D deforming mesh to bridge the gap between video and joint space; and (3) a Unified Motion Decoder that fuses these cues to produce temporally coherent trajectories. We also curate Truebones Zoo with 1038 motion clips, each providing a standardized skeleton-mesh-render triad. Experiments on both in-domain benchmarks and in-the-wild videos show that MoCapAnything delivers high-quality skeletal animations and exhibits meaningful cross-species retargeting across heterogeneous rigs, enabling scalable, prompt-driven 3D motion capture for arbitrary assets. Project page: this https URL
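The abstract's factorized design first predicts 3D joint trajectories, then recovers asset-specific rotations via constraint-aware inverse kinematics. A minimal sketch of that second step, under the assumption that each bone's rotation is recovered by aligning its rest-pose direction with the predicted one (Rodrigues' formula); function names and this simple alignment strategy are illustrative, not the paper's exact IK solver:

```python
import numpy as np

def rotation_between(u, v):
    """Rotation matrix taking unit vector u onto unit vector v (Rodrigues' formula)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)
    s = np.linalg.norm(axis)          # sin(theta)
    c = float(np.dot(u, v))           # cos(theta)
    if s < 1e-8:
        if c > 0:                     # already aligned
            return np.eye(3)
        # anti-parallel: rotate 180 degrees about any axis perpendicular to u
        n = np.cross(u, [1.0, 0.0, 0.0])
        if np.linalg.norm(n) < 1e-8:
            n = np.cross(u, [0.0, 1.0, 0.0])
        n = n / np.linalg.norm(n)
        return 2.0 * np.outer(n, n) - np.eye(3)
    k = axis / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def bone_rotations(rest_joints, pred_joints, parents):
    """Per-bone rotations aligning rest-pose bone vectors to predicted ones.

    rest_joints, pred_joints: (J, 3) arrays; parents[j] is the parent index
    of joint j, with -1 marking the root (which has no bone to rotate).
    """
    rots = {}
    for j, p in enumerate(parents):
        if p < 0:
            continue  # root joint: no parent bone
        rots[j] = rotation_between(rest_joints[j] - rest_joints[p],
                                   pred_joints[j] - pred_joints[p])
    return rots
```

In a full pipeline these per-bone rotations would additionally be filtered through the rig's joint-limit constraints and converted to the asset's local joint frames before export to BVH; the sketch shows only the core trajectory-to-rotation alignment.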

Top-level tags: computer vision, multi-modal systems
Detailed tags: motion capture, 3d animation, inverse kinematics, video understanding, cross-species retargeting

MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos


1️⃣ One-sentence summary

This paper presents MoCapAnything, a general-purpose system that, given only an ordinary monocular video and an arbitrary rigged 3D character model, automatically generates the animation data needed to drive that character, overcoming the restriction of traditional motion capture to specific species or templates.


Source: arXiv:2512.10881