arXiv submission date: 2026-01-22
📄 Abstract - ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion

Generating animated 3D objects is at the heart of many applications, yet most advanced works are difficult to apply in practice because of their restrictive setup, long runtime, or limited quality. We introduce ActionMesh, a generative model that predicts production-ready 3D meshes "in action" in a feed-forward manner. Drawing inspiration from early video models, our key insight is to modify existing 3D diffusion models to include a temporal axis, resulting in a framework we dub "temporal 3D diffusion". Specifically, we first adapt the 3D diffusion stage to generate a sequence of synchronized latents representing time-varying but independent 3D shapes. Second, we design a temporal 3D autoencoder that translates a sequence of independent shapes into the corresponding deformations of a pre-defined reference shape, allowing us to build an animation. Combining these two components, ActionMesh generates animated 3D meshes from different inputs such as a monocular video, a text description, or even a 3D mesh with a text prompt describing its animation. Moreover, compared to previous approaches, our method is fast and produces results that are rig-free and topology-consistent, enabling rapid iteration and seamless downstream applications like texturing and retargeting. We evaluate our model on standard video-to-4D benchmarks (Consistent4D, Objaverse) and report state-of-the-art performance on both geometric accuracy and temporal consistency, demonstrating that our model can deliver animated 3D meshes with unprecedented speed and quality.
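To make the two-stage design concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a per-frame 3D latent denoiser extended with attention over the temporal axis so the frame latents stay synchronized, followed by a decoder that maps the latent sequence to per-frame vertex deformations of a single reference mesh. Everything here (`TemporalShapeDenoiser`, `TemporalDeformationDecoder`, `sample_shape_sequence`, all dimensions, and the sampling update) is a hypothetical illustration under assumed tensor shapes, not the authors' actual architecture or API.

```python
import torch
import torch.nn as nn


class TemporalShapeDenoiser(nn.Module):
    """Hypothetical stage 1: a per-frame 3D latent denoiser with temporal
    attention, so the T frame latents are generated in sync."""

    def __init__(self, latent_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.spatial = nn.Linear(latent_dim, latent_dim)  # stand-in for a 3D diffusion backbone
        self.temporal = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)

    def forward(self, z: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # z: (B, T, N, D) -- batch, frames, latent tokens, channels.
        # t is the diffusion timestep; a real model conditions on it,
        # omitted here for brevity.
        B, T, N, D = z.shape
        h = self.spatial(z)
        # Attend across the temporal axis independently for each token.
        h = h.permute(0, 2, 1, 3).reshape(B * N, T, D)
        h, _ = self.temporal(h, h, h)
        return h.reshape(B, N, T, D).permute(0, 2, 1, 3)


class TemporalDeformationDecoder(nn.Module):
    """Hypothetical stage 2: decode the latent sequence into per-frame vertex
    offsets of one reference mesh, giving a topology-consistent animation."""

    def __init__(self, latent_dim: int = 64, num_vertices: int = 1024):
        super().__init__()
        self.num_vertices = num_vertices
        self.to_offsets = nn.Linear(latent_dim, num_vertices * 3)

    def forward(self, z: torch.Tensor, ref_vertices: torch.Tensor) -> torch.Tensor:
        # z: (B, T, N, D); ref_vertices: (V, 3).
        pooled = z.mean(dim=2)                                  # (B, T, D)
        offsets = self.to_offsets(pooled)                       # (B, T, V*3)
        offsets = offsets.view(*offsets.shape[:2], self.num_vertices, 3)
        # The same reference topology is deformed at every frame.
        return ref_vertices + offsets                           # (B, T, V, 3)


@torch.no_grad()
def sample_shape_sequence(denoiser, frames=8, tokens=16, dim=64, steps=50):
    """Crude ancestral-sampling loop over a sequence of shape latents; a real
    sampler would follow the noise schedule of the diffusion model."""
    z = torch.randn(1, frames, tokens, dim)
    for step in reversed(range(steps)):
        t = torch.full((1,), step)
        eps = denoiser(z, t)      # predicted noise, jointly for all frames
        z = z - eps / steps       # simplistic update for illustration only
    return z
```

A usage sketch under the same assumptions: sample the synchronized latents, then deform a placeholder reference mesh with them.

```python
denoiser = TemporalShapeDenoiser()
decoder = TemporalDeformationDecoder()
z_seq = sample_shape_sequence(denoiser)
ref = torch.randn(1024, 3)       # placeholder reference mesh vertices
animation = decoder(z_seq, ref)  # (1, 8, 1024, 3): vertex positions per frame
```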

Top-level tags: computer vision multi-modal aigc
Detailed tags: 3d mesh generation temporal diffusion video-to-4d animated 3d diffusion models

ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion


1️⃣ One-Sentence Summary

This paper proposes ActionMesh, a fast generative model that uses temporal 3D diffusion to generate high-quality, production-ready animated 3D meshes in a single pass from inputs such as a video, a text description, or a static 3D model.

Source: arXiv 2601.16148