arXiv submission date: 2025-12-07
📄 Abstract - MeshSplatting: Differentiable Rendering with Opaque Meshes

Primitive-based splatting methods like 3D Gaussian Splatting have revolutionized novel view synthesis with real-time rendering. However, their point-based representations remain incompatible with mesh-based pipelines that power AR/VR and game engines. We present MeshSplatting, a mesh-based reconstruction approach that jointly optimizes geometry and appearance through differentiable rendering. By enforcing connectivity via restricted Delaunay triangulation and refining surface consistency, MeshSplatting creates end-to-end smooth, visually high-quality meshes that render efficiently in real-time 3D engines. On Mip-NeRF360, it boosts PSNR by +0.69 dB over the current state-of-the-art MiLo for mesh-based novel view synthesis, while training 2x faster and using 2x less memory, bridging neural rendering and interactive 3D graphics for seamless real-time scene interaction. The project page is available at this https URL.
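The abstract describes jointly optimizing mesh geometry and appearance through differentiable rendering. As a loose illustration of that idea (not the paper's method), the toy sketch below optimizes per-vertex colors of a single triangle by gradient descent so that a trivially differentiable "rasterization" (barycentric interpolation, which is linear in the colors) matches a target image; all names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Barycentric weights of 5 sample pixels inside one triangle (rows sum to 1).
W = rng.dirichlet(np.ones(3), size=5)   # shape (5 pixels, 3 vertices)
target = rng.random(5)                  # target grayscale value per pixel

colors = np.zeros(3)                    # per-vertex colors to optimize
init_loss = np.mean((W @ colors - target) ** 2)

lr = 0.5
for _ in range(200):
    rendered = W @ colors               # differentiable "rendering" step
    # Analytic gradient of the mean-squared-error loss w.r.t. the colors.
    grad = 2 * W.T @ (rendered - target) / len(target)
    colors -= lr * grad

loss = np.mean((W @ colors - target) ** 2)
```

A real system like the one described would differentiate through full mesh rasterization (visibility, geometry, and shading) rather than this fixed linear blend, but the optimization loop has the same shape: render, compare to the target views, and descend the gradient.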

Top-level tags: computer vision, model training, systems
Detailed tags: differentiable rendering, novel view synthesis, mesh reconstruction, 3D Gaussian splatting, real-time rendering

MeshSplatting: Differentiable Rendering with Opaque Meshes


1️⃣ One-sentence summary

This paper proposes MeshSplatting, a method that bridges modern neural rendering and traditional 3D game engines: by jointly optimizing mesh geometry and appearance, it efficiently produces high-quality 3D scene models that can be rendered in real time.


Source: arXiv 2512.06818