Abstract - MetricAnything: Scaling Metric Depth Pretraining with Noisy Heterogeneous Sources
Scaling has powered recent advances in vision foundation models, yet extending this paradigm to metric depth estimation remains challenging due to heterogeneous sensor noise, camera-dependent biases, and metric ambiguity in noisy cross-source 3D data. We introduce MetricAnything, a simple and scalable pretraining framework that learns metric depth from noisy, diverse 3D sources without manually engineered prompts, camera-specific modeling, or task-specific architectures. Central to our approach is the Sparse Metric Prompt, created by randomly masking depth maps, which serves as a universal interface that decouples spatial reasoning from sensor and camera biases. Using about 20M image-depth pairs spanning reconstructed, captured, and rendered 3D data across 10,000 camera models, we demonstrate, for the first time, a clear scaling trend in metric depth estimation. The pretrained model excels at prompt-driven tasks such as depth completion, super-resolution, and radar-camera fusion, while its distilled prompt-free student achieves state-of-the-art results on monocular depth estimation, camera intrinsics recovery, single- and multi-view metric 3D reconstruction, and VLA planning. We also show that using the pretrained ViT of MetricAnything as a visual encoder significantly boosts the spatial-intelligence capabilities of Multimodal Large Language Models. These results show that metric depth estimation can benefit from the same scaling laws that drive modern foundation models, establishing a new path toward scalable and efficient real-world metric perception. We open-source MetricAnything at this http URL to support community research.
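The abstract describes the Sparse Metric Prompt only at a high level: a dense depth map is randomly masked so that only a small fraction of metric values remain as a prompt. As a hedged illustration (the function name, the `keep_ratio` parameter, and the zero-for-missing convention are assumptions for this sketch, not the paper's actual API), the masking step might look like:

```python
import numpy as np

def sparse_metric_prompt(depth, keep_ratio=0.01, seed=None):
    """Randomly mask a dense depth map, keeping only a small fraction
    of valid metric values as a sparse prompt.

    Returns the sparse depth map and the binary keep-mask.
    """
    rng = np.random.default_rng(seed)
    valid = depth > 0  # depth sensors commonly use 0 for missing values
    mask = valid & (rng.random(depth.shape) < keep_ratio)
    sparse = np.where(mask, depth, 0.0)
    return sparse, mask

# Toy example: a 4x4 "depth map" at 2.5 m, keeping ~25% of the pixels.
depth = np.full((4, 4), 2.5, dtype=np.float32)
sparse, mask = sparse_metric_prompt(depth, keep_ratio=0.25, seed=0)
```

Because the surviving values are raw metric depths rather than sensor- or camera-specific encodings, the same sparse-prompt interface can in principle be fed by LiDAR, radar, or rendered depth alike, which is what lets heterogeneous sources share one pretraining objective.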
MetricAnything: Scaling Metric Depth Pretraining with Noisy Heterogeneous Sources
1️⃣ One-Sentence Summary
This paper proposes a general pretraining framework called MetricAnything that learns metric depth from large amounts of noisy 3D data drawn from diverse sources. It demonstrates, for the first time, that metric depth estimation benefits from data scaling laws just like other vision foundation models, and it significantly improves performance on a range of downstream tasks such as depth completion, 3D reconstruction, and spatial-intelligence understanding.