arXiv submission date: 2025-12-18
📄 Abstract - Depth Any Panoramas: A Foundation Model for Panoramic Depth Estimation

In this work, we present a panoramic metric depth foundation model that generalizes across diverse scene distances. We explore a data-in-the-loop paradigm from the view of both data construction and framework design. We collect a large-scale dataset by combining public datasets, high-quality synthetic data from our UE5 simulator and text-to-image models, and real panoramic images from the web. To reduce domain gaps between indoor/outdoor and synthetic/real data, we introduce a three-stage pseudo-label curation pipeline to generate reliable ground truth for unlabeled images. For the model, we adopt DINOv3-Large as the backbone for its strong pre-trained generalization, and introduce a plug-and-play range mask head, sharpness-centric optimization, and geometry-centric optimization to improve robustness to varying distances and enforce geometric consistency across views. Experiments on multiple benchmarks (e.g., Stanford2D3D, Matterport3D, and Deep360) demonstrate strong performance and zero-shot generalization, with particularly robust and stable metric predictions in diverse real-world scenes. The project page can be found at: this https URL
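The abstract does not spell out how the plug-and-play range mask head works; below is a minimal NumPy sketch of one plausible reading, assuming the head scores each pixel against a set of metric distance bins and the scores are used to gate depth predictions for range consistency. The function name, the binning scheme, and the gating rule are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def range_consistency_mask(depth, range_logits, bins):
    """Keep only pixels whose metric depth agrees with the head's
    most-confident distance bin (names and binning are assumptions)."""
    # softmax over the bin axis -> per-pixel bin probabilities
    z = range_logits - range_logits.max(axis=0, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    pred_bin = probs.argmax(axis=0)                 # (H, W) bin index
    lo = np.array([b[0] for b in bins])[pred_bin]   # (H, W) lower bounds
    hi = np.array([b[1] for b in bins])[pred_bin]   # (H, W) upper bounds
    return (depth >= lo) & (depth < hi)

depth = np.array([[1.0, 10.0]])                     # (1, 2) metric depths
logits = np.array([[[5.0, 0.0]],                    # bin 0: 0-5 m
                   [[0.0, 5.0]]])                   # bin 1: 5-100 m
mask = range_consistency_mask(depth, logits, [(0.0, 5.0), (5.0, 100.0)])
print(mask)  # [[ True  True]]
```

In this toy example both pixels' depths fall inside their most-confident bin, so both survive the gate; a pixel whose depth contradicted its predicted bin would be masked out.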

Top tags: computer vision, multi-modal, model training
Detailed tags: depth estimation, panoramic images, domain adaptation, foundation model, zero-shot generalization

Depth Any Panoramas: A Foundation Model for Panoramic Depth Estimation


1️⃣ One-sentence summary

This work presents a panoramic depth estimation foundation model that adapts to varying scene distances: by combining a large-scale mixed dataset, a novel pseudo-label curation pipeline, and several optimization techniques, it achieves robust and accurate zero-shot metric depth prediction across diverse real-world scenes.


Source: arXiv:2512.16913