📄 Abstract - Fast3Dcache: Training-free 3D Geometry Synthesis Acceleration

Diffusion models have achieved impressive generative quality across modalities like 2D images, videos, and 3D shapes, but their inference remains computationally expensive due to the iterative denoising process. While recent caching-based methods effectively reuse redundant computations to speed up 2D and video generation, directly applying these techniques to 3D diffusion models can severely disrupt geometric consistency. In 3D synthesis, even minor numerical errors in cached latent features accumulate, causing structural artifacts and topological inconsistencies. To overcome this limitation, we propose Fast3Dcache, a training-free geometry-aware caching framework that accelerates 3D diffusion inference while preserving geometric fidelity. Our method introduces a Predictive Caching Scheduler Constraint (PCSC) to dynamically determine cache quotas according to voxel stabilization patterns, and a Spatiotemporal Stability Criterion (SSC) to select stable features for reuse based on velocity magnitude and acceleration. Comprehensive experiments show that Fast3Dcache accelerates inference significantly, achieving up to a 27.12% speed-up and a 54.8% reduction in FLOPs, with minimal degradation in geometric quality as measured by Chamfer Distance (2.48%) and F-Score (1.95%).
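
The abstract does not spell out the exact formulas behind PCSC and SSC, but the core idea (reuse a cached latent feature only when both its step-to-step velocity and its acceleration are small, subject to a per-step cache budget) can be sketched in a few lines. The snippet below is a minimal, hypothetical PyTorch illustration; the function name, thresholds, and the top-k quota rule are assumptions made for clarity, not the paper's actual implementation.

```python
import torch

def select_cached_features(f_prev2, f_prev1, f_curr,
                           v_thresh=0.05, a_thresh=0.02, quota=None):
    """Return a boolean mask of latent features stable enough to reuse from
    cache at the next denoising step.

    Stability is judged by finite-difference velocity and acceleration of each
    feature across the last three denoising steps; an optional per-step quota
    caps how many features may be reused (a simple stand-in for a
    scheduler-controlled cache budget). All thresholds here are illustrative.
    """
    velocity = (f_curr - f_prev1).abs()                      # first-order change
    acceleration = (f_curr - 2.0 * f_prev1 + f_prev2).abs()  # second-order change
    mask = (velocity < v_thresh) & (acceleration < a_thresh)

    if quota is not None and int(mask.sum()) > quota:
        # keep only the `quota` most stable features (smallest combined change)
        score = (velocity + acceleration).flatten()
        stable_idx = mask.flatten().nonzero(as_tuple=False).squeeze(-1)
        keep = stable_idx[score[stable_idx].argsort()[:quota]]
        capped = torch.zeros_like(mask.flatten())
        capped[keep] = True
        mask = capped.view_as(mask)
    return mask
```

In an actual sampler, such a mask would decide which activations are fetched from the cache versus recomputed at the next denoising step; the speed-up comes from skipping computation for the masked features.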

Top-level tags: model training, computer vision, AIGC
Detailed tags: 3D generation, diffusion models, inference acceleration, computational efficiency, geometry synthesis

Fast3Dcache: Training-free 3D Geometry Synthesis Acceleration


1️⃣ One-Sentence Summary

This paper proposes a new method called Fast3Dcache that, without retraining the model, significantly speeds up 3D generation by intelligently reusing stable intermediate results during computation, while effectively avoiding the 3D geometric distortions caused by directly applying 2D acceleration techniques.

