Any Resolution Any Geometry: From Multi-View To Multi-Patch
1️⃣ One-Sentence Summary
This paper proposes URGT, a multi-patch Transformer model that partitions a high-resolution image into patches and processes them jointly via a cross-patch attention mechanism. It achieves accurate depth and surface-normal estimation from a single image, significantly improving detail preservation and global consistency, and reaches state-of-the-art performance on multiple metrics.
Joint estimation of surface normals and depth is essential for holistic 3D scene understanding, yet high-resolution prediction remains difficult due to the trade-off between preserving fine local detail and maintaining global consistency. To address this challenge, we propose the Ultra Resolution Geometry Transformer (URGT), which adapts the Visual Geometry Grounded Transformer (VGGT) into a unified multi-patch transformer for monocular high-resolution depth-normal estimation. A single high-resolution image is partitioned into patches that are augmented with coarse depth and normal priors from pre-trained models, and jointly processed in a single forward pass to predict refined geometric outputs. Global coherence is enforced through cross-patch attention, which enables long-range geometric reasoning and seamless propagation of information across patches within a shared backbone. To further enhance spatial robustness, we introduce a GridMix patch sampling strategy that probabilistically samples grid configurations during training, improving inter-patch consistency and generalization. Our method achieves state-of-the-art results on UnrealStereo4K, jointly improving depth and normal estimation, reducing AbsRel from 0.0582 to 0.0291, RMSE from 2.17 to 1.31, and lowering mean angular error from 23.36 degrees to 18.51 degrees, while producing sharper and more stable geometry. The proposed multi-patch framework also demonstrates strong zero-shot and cross-domain generalization and scales effectively to very high resolutions, offering an efficient and extensible solution for high-quality geometry refinement.
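The abstract's two core mechanisms, partitioning a high-resolution image into a patch grid and probabilistically sampling the grid configuration during training (GridMix), can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the grid configurations, their sampling probabilities, and the 4K image size are assumptions chosen for the example.

```python
import random

def sample_grid(configs=((2, 2), (3, 3), (4, 4)),
                probs=(0.5, 0.3, 0.2), rng=random):
    """GridMix-style sampling: pick a grid layout probabilistically
    at each training step (configs/probs are illustrative)."""
    return rng.choices(configs, weights=probs, k=1)[0]

def partition(image_hw, grid):
    """Split an H x W image into rows x cols non-overlapping
    patch boxes (y0, y1, x0, x1)."""
    H, W = image_hw
    rows, cols = grid
    boxes = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * H // rows, (r + 1) * H // rows
            x0, x1 = c * W // cols, (c + 1) * W // cols
            boxes.append((y0, y1, x0, x1))
    return boxes

# Example: partition a 4K frame with a randomly sampled grid.
boxes = partition((2160, 3840), sample_grid())
```

In the full model, each box would be cropped, concatenated with coarse depth and normal priors, and fed through the shared backbone in one forward pass, with cross-patch attention linking the patches.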
Source: arXiv: 2603.03026