arXiv submission date: 2026-01-06
📄 Abstract - InfiniDepth: Arbitrary-Resolution and Fine-Grained Depth Estimation with Neural Implicit Fields

Existing depth estimation methods are fundamentally limited to predicting depth on discrete image grids. Such representations restrict their scalability to arbitrary output resolutions and hinder the recovery of geometric detail. This paper introduces InfiniDepth, which represents depth as neural implicit fields. Through a simple yet effective local implicit decoder, we can query depth at continuous 2D coordinates, enabling arbitrary-resolution and fine-grained depth estimation. To better assess our method's capabilities, we curate a high-quality 4K synthetic benchmark from five different games, spanning diverse scenes with rich geometric and appearance details. Extensive experiments demonstrate that InfiniDepth achieves state-of-the-art performance on both synthetic and real-world benchmarks across relative and metric depth estimation tasks, particularly excelling in fine-detail regions. It also benefits the task of novel view synthesis under large viewpoint shifts, producing high-quality results with fewer holes and artifacts.
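
To make the core idea concrete, below is a minimal sketch, not the authors' code, of how a "local implicit decoder" could query depth at continuous 2D coordinates: bilinearly sample an encoder feature map at arbitrary normalized (x, y) positions, then decode each sampled feature together with its query coordinate into a depth value with a small MLP. All module names, dimensions, and design choices here are illustrative assumptions.

```python
# Hypothetical sketch of a local implicit depth decoder (not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalImplicitDepthDecoder(nn.Module):
    def __init__(self, feat_dim: int = 256, hidden_dim: int = 256):
        super().__init__()
        # The MLP consumes a sampled feature vector plus the 2D query coordinate.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),  # scalar depth per query point
        )

    def forward(self, feat_map: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        """
        feat_map: (B, C, H, W) features from any image encoder.
        coords:   (B, N, 2) continuous query coordinates in [-1, 1], (x, y) order.
        returns:  (B, N) depth values at the queried positions.
        """
        # grid_sample expects a (B, H_out, W_out, 2) grid; treat the N queries as a 1xN grid.
        grid = coords.unsqueeze(1)                       # (B, 1, N, 2)
        sampled = F.grid_sample(feat_map, grid,
                                mode="bilinear", align_corners=False)
        sampled = sampled.squeeze(2).permute(0, 2, 1)    # (B, N, C)
        depth = self.mlp(torch.cat([sampled, coords], dim=-1))
        return depth.squeeze(-1)


if __name__ == "__main__":
    decoder = LocalImplicitDepthDecoder(feat_dim=256)
    feats = torch.randn(1, 256, 64, 64)        # encoder features for one image
    queries = torch.rand(1, 4096, 2) * 2 - 1   # 4096 arbitrary continuous query points
    print(decoder(feats, queries).shape)       # torch.Size([1, 4096])
```

Because the queries are continuous rather than tied to a pixel grid, the same trained decoder can, in principle, be evaluated at any output resolution by densifying the query set.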

Top-level tags: computer vision, model training, model evaluation
Detailed tags: depth estimation, neural implicit fields, arbitrary resolution, novel view synthesis, benchmark

InfiniDepth: Arbitrary-Resolution and Fine-Grained Depth Estimation with Neural Implicit Fields


1️⃣ One-Sentence Summary

This paper proposes a new method called InfiniDepth, which represents depth as a neural implicit field, enabling continuous, fine-grained depth estimation at arbitrary image resolutions and achieving state-of-the-art performance across multiple tasks.

Source: arXiv:2601.03252