arXiv submission date: 2025-12-12
📄 Abstract - Particulate: Feed-Forward 3D Object Articulation

We present Particulate, a feed-forward approach that, given a single static 3D mesh of an everyday object, directly infers all attributes of the underlying articulated structure, including its 3D parts, kinematic structure, and motion constraints. At its core is a transformer network, Part Articulation Transformer, which processes a point cloud of the input mesh using a flexible and scalable architecture to predict all the aforementioned attributes with native multi-joint support. We train the network end-to-end on a diverse collection of articulated 3D assets from public datasets. During inference, Particulate lifts the network's feed-forward prediction to the input mesh, yielding a fully articulated 3D model in seconds, much faster than prior approaches that require per-object optimization. Particulate can also accurately infer the articulated structure of AI-generated 3D assets, enabling full-fledged extraction of articulated 3D objects from a single (real or synthetic) image when combined with an off-the-shelf image-to-3D generator. We further introduce a new challenging benchmark for 3D articulation estimation curated from high-quality public 3D assets, and redesign the evaluation protocol to be more consistent with human preferences. Quantitative and qualitative results show that Particulate significantly outperforms state-of-the-art approaches.
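To make the paper's output concrete, below is a minimal, hypothetical Python sketch (not the authors' code or API) of the kind of result a feed-forward articulation predictor like Particulate produces for an input mesh: a set of rigid parts, a kinematic tree linking them, and a motion constraint (joint type, axis, limits) for each non-root part. All class, field, and function names here are illustrative assumptions.

```python
# Hypothetical sketch: the data an articulation predictor might return.
# Names and fields are assumptions for illustration, not the paper's code.
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class PredictedJoint:
    """Motion constraint linking a part to its kinematic parent."""
    joint_type: str            # e.g. "revolute" or "prismatic"
    axis: np.ndarray           # 3D motion axis (unit vector)
    origin: np.ndarray         # a point the axis passes through
    limit: tuple               # (lower, upper) range of motion

@dataclass
class PredictedPart:
    """One rigid part of the articulated object."""
    point_indices: np.ndarray              # input points assigned to this part
    parent: Optional[int] = None           # parent part index in the kinematic tree
    joint: Optional[PredictedJoint] = None # constraint relative to the parent

def articulate(points: np.ndarray) -> List[PredictedPart]:
    """Placeholder for a feed-forward network call: point cloud in,
    articulated structure (parts + kinematic tree + joints) out."""
    # A real system would run a transformer here; this stub returns a
    # trivial single-part, no-joint prediction so the sketch executes.
    return [PredictedPart(point_indices=np.arange(len(points)))]

if __name__ == "__main__":
    cloud = np.random.rand(2048, 3)   # stand-in for points sampled from a mesh
    parts = articulate(cloud)
    print(f"predicted {len(parts)} part(s)")
```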

Top-level tags: computer vision · 3d vision · model training
Detailed tags: 3d object articulation · kinematic structure · transformer network · mesh processing · benchmark evaluation

Particulate: Feed-Forward 3D Object Articulation


1️⃣ One-Sentence Summary

This paper presents Particulate, a feed-forward AI model that, from a single static 3D object mesh, directly infers the object's complete articulated structure, including its part segmentation, kinematic connectivity, and motion constraints, quickly producing a fully articulated 3D model.


Source: arXiv:2512.11798