Intrinsic Geometry-Appearance Consistency Optimization for Sparse-View Gaussian Splatting
1️⃣ One-Sentence Summary
This paper proposes a new method called MVD-HuGaS that, starting from a single photo of a person, uses a fine-tuned multi-view diffusion model to generate images from multiple viewpoints, jointly optimizes the 3D model and camera poses, and ultimately reconstructs a detailed, photorealistic 3D digital human that can be viewed freely from any angle.
3D human reconstruction from a single image is a challenging problem and has been extensively studied in the literature. Recently, some methods have resorted to diffusion models for guidance, optimizing a 3D representation via Score Distillation Sampling (SDS) or generating a back-view image to facilitate reconstruction. However, these methods tend to produce unsatisfactory artifacts (e.g., flattened human structure or over-smoothed results caused by inconsistent priors from multiple views) and struggle to generalize to real-world images in the wild. In this work, we present MVD-HuGaS, which enables free-view 3D human rendering from a single image via a multi-view human diffusion model. We first generate multi-view images from the single reference image with an enhanced multi-view diffusion model, fine-tuned on high-quality 3D human datasets to incorporate 3D geometry priors and human structure priors. To infer accurate camera poses from the sparse generated multi-view images for reconstruction, an alignment module is introduced that facilitates joint optimization of the 3D Gaussians and camera poses. Furthermore, we propose a depth-based Facial Distortion Mitigation module to refine the generated facial regions, thereby improving the overall fidelity of the reconstruction. Finally, leveraging the refined multi-view images along with their accurate camera poses, MVD-HuGaS optimizes the 3D Gaussians of the target human for high-fidelity free-view renderings. Extensive experiments on the Thuman2.0 and 2K2K datasets show that the proposed MVD-HuGaS achieves state-of-the-art performance on single-view 3D human rendering.
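The alignment module described above jointly refines the scene representation and the camera poses of the generated views by minimizing a reprojection-style loss. The paper does not give the exact formulation, so the following is only a minimal toy sketch of the general technique (joint gradient descent over point parameters and a camera offset under a simplified translational camera model); all names, the camera model, and the loss are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy stand-in for joint scene/pose alignment: points act as a crude
# proxy for 3D Gaussian centers, and t2 is the unknown translation of
# a second generated view. View 1's pose is fixed at the origin to
# remove the global-translation gauge ambiguity (a common convention).
rng = np.random.default_rng(0)
true_points = rng.normal(size=(5, 2))   # latent scene points (toy 2D)
true_t2 = np.array([0.3, -0.2])         # unknown pose of view 2

obs1 = true_points                      # observations in view 1 (pose = 0)
obs2 = true_points + true_t2            # observations in view 2

points = np.zeros_like(true_points)     # jointly optimized estimates
t2 = np.zeros(2)
lr = 0.1
for _ in range(500):
    r1 = points - obs1                  # residual against view 1
    r2 = points + t2 - obs2             # residual against view 2
    # Gradients of L = sum||r1||^2 + sum||r2||^2 w.r.t. each variable:
    points -= lr * (2 * r1 + 2 * r2)
    t2 -= lr * 2 * r2.sum(axis=0)
```

After convergence, `points` and `t2` recover the ground-truth geometry and pose; the real method optimizes full 3D Gaussian parameters and 6-DoF camera poses with a photometric rendering loss, but the joint-descent structure is the same idea.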
Source: arXiv: 2603.02893