GarmentPainter: Efficient 3D Garment Texture Synthesis with Character-Guided Diffusion Model
1️⃣ One-Sentence Summary
This paper proposes a new method called GarmentPainter, which leverages a character reference image and 3D structural information to efficiently generate high-quality, globally consistent 3D garment textures, addressing the shortcomings of existing methods in 3D consistency and flexibility.
Generating high-fidelity, 3D-consistent garment textures remains a challenging problem due to the inherent complexities of garment structures and the stringent requirement for detailed, globally consistent texture synthesis. Existing approaches either rely on 2D-based diffusion models, which inherently struggle with 3D consistency; require expensive multi-step optimization; or depend on strict spatial alignment between 2D reference images and 3D meshes, which limits their flexibility and scalability. In this work, we introduce GarmentPainter, a simple yet efficient framework for synthesizing high-quality, 3D-aware garment textures in UV space. Our method leverages a UV position map as the 3D structural guidance, ensuring texture consistency across the garment surface during texture generation. To enhance control and adaptability, we introduce a type selection module, enabling fine-grained texture generation for specific garment components based on a character reference image, without requiring alignment between the reference image and the 3D mesh. GarmentPainter efficiently integrates all guidance signals into the input of a diffusion model in a spatially aligned manner, without modifying the underlying UNet architecture. Extensive experiments demonstrate that GarmentPainter achieves state-of-the-art performance in terms of visual fidelity, 3D consistency, and computational efficiency, outperforming existing methods in both qualitative and quantitative evaluations.
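The abstract's key design point is that all guidance signals are fed to the diffusion model as extra, spatially aligned input channels rather than through architectural changes. A minimal sketch of that idea, assuming a UV-space latent and hypothetical channel counts (the paper's actual tensor shapes and feature extractors are not specified here):

```python
import numpy as np

def build_diffusion_input(noisy_latent, uv_position_map, ref_features):
    """Illustrative sketch (not the paper's released code): stack all
    guidance signals channel-wise so they remain spatially aligned with
    the latent being denoised, leaving the UNet architecture untouched."""
    # noisy_latent:    (C, H, W) latent in UV space at the current step
    # uv_position_map: (3, H, W) per-texel 3D surface coordinates
    # ref_features:    (E, H, W) features derived from the character
    #                  reference image (hypothetical shape)
    assert noisy_latent.shape[1:] == uv_position_map.shape[1:] == ref_features.shape[1:]
    return np.concatenate([noisy_latent, uv_position_map, ref_features], axis=0)

H, W = 64, 64
x = build_diffusion_input(
    np.random.randn(4, H, W),   # e.g. a 4-channel VAE latent
    np.random.randn(3, H, W),   # UV position map (x, y, z per texel)
    np.random.randn(8, H, W),   # assumed reference-feature channel count
)
print(x.shape)  # (15, 64, 64)
```

Because the conditioning only widens the input tensor, a pretrained denoiser can be adapted by expanding its first convolution, which is what "without modifying the underlying UNet architecture" suggests.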
Source: arXiv: 2603.08228