ShapeR: Robust Conditional 3D Shape Generation from Casual Captures
1️⃣ One-sentence summary
This paper introduces ShapeR, a new method that robustly generates high-quality, metrically sized 3D object models from everyday videos casually captured by ordinary users, despite occlusions and background clutter, and substantially outperforms existing techniques.
Recent advances in 3D shape generation have achieved impressive results, but most existing methods rely on clean, unoccluded, and well-segmented inputs. Such conditions are rarely met in real-world scenarios. We present ShapeR, a novel approach for conditional 3D object shape generation from casually captured sequences. Given an image sequence, we leverage off-the-shelf visual-inertial SLAM, 3D detection algorithms, and vision-language models to extract, for each object, a set of sparse SLAM points, posed multi-view images, and machine-generated captions. A rectified flow transformer trained to effectively condition on these modalities then generates high-fidelity metric 3D shapes. To ensure robustness to the challenges of casually captured data, we employ a range of techniques including on-the-fly compositional augmentations, a curriculum training scheme spanning object- and scene-level datasets, and strategies to handle background clutter. Additionally, we introduce a new evaluation benchmark comprising 178 in-the-wild objects across 7 real-world scenes with geometry annotations. Experiments show that ShapeR significantly outperforms existing approaches in this challenging setting, achieving a 2.7x improvement in Chamfer distance over the state of the art.
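The reported 2.7x gain is measured in Chamfer distance, the standard point-cloud metric for comparing a generated shape against ground-truth geometry. As a reference, here is a minimal sketch of the symmetric Chamfer distance in plain NumPy; the paper's exact sampling density, normalization, and alignment protocol are not specified here and are assumptions of this illustration.

```python
import numpy as np

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds pred (N, 3) and gt (M, 3).

    Averages squared nearest-neighbour distances in both directions; the
    benchmark's exact normalization and point-sampling scheme may differ.
    """
    # Pairwise squared distances, shape (N, M)
    d2 = np.sum((pred[:, None, :] - gt[None, :, :]) ** 2, axis=-1)
    pred_to_gt = d2.min(axis=1).mean()  # how well the prediction covers the ground truth
    gt_to_pred = d2.min(axis=0).mean()  # how well the ground truth covers the prediction
    return float(pred_to_gt + gt_to_pred)

# Toy usage: identical clouds give a distance of 0.
pts = np.random.rand(1024, 3)
print(chamfer_distance(pts, pts))  # -> 0.0
```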
Source: arXiv: 2601.11514