Masked Depth Modeling for Spatial Perception
1️⃣ One-Sentence Summary
This paper presents LingBot-Depth, a depth completion model that treats inaccurate depth-sensor readings as "masked" signals and inpaints them using visual context, surpassing top-tier RGB-D cameras in both precision and coverage while providing aligned representations across the RGB and depth modalities.
Spatial visual perception is a fundamental requirement in physical-world applications like autonomous driving and robotic manipulation, driven by the need to interact with 3D environments. Capturing pixel-aligned metric depth with RGB-D cameras is arguably the most direct route, yet it often runs into hardware limitations and challenging imaging conditions, especially in the presence of specular or texture-less surfaces. In this work, we argue that the inaccuracies from depth sensors can be viewed as "masked" signals that inherently reflect underlying geometric ambiguities. Building on this motivation, we present LingBot-Depth, a depth completion model that leverages visual context to refine depth maps through masked depth modeling and incorporates an automated data curation pipeline for scalable training. Encouragingly, our model outperforms top-tier RGB-D cameras in terms of both depth precision and pixel coverage. Experimental results on a range of downstream tasks further suggest that LingBot-Depth offers an aligned latent representation across RGB and depth modalities. We release the code, checkpoint, and 3M RGB-depth pairs (including 2M real data and 1M simulated data) to the community of spatial perception.
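The core idea above can be illustrated with a minimal sketch: invalid sensor readings act as "natural" masks, additional valid pixels are masked at random so ground truth is available for supervision, and a model conditioned on the RGB image reconstructs depth at the masked locations. The function name, mask ratio, and the mean-fill stand-in for the network are illustrative assumptions, not the paper's actual recipe.

```python
import numpy as np

def masked_depth_training_step(rgb, sensor_depth, mask_ratio=0.5, rng=None):
    """One greatly simplified masked-depth-modeling step (illustrative only).

    Pixels where the sensor failed (depth == 0) are "natural" masks with no
    ground truth; on top of them, we randomly mask valid pixels whose true
    depth IS known, so reconstruction there can be supervised.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    valid = sensor_depth > 0                       # sensor holes = natural mask
    rand_mask = rng.random(sensor_depth.shape) < mask_ratio
    masked = valid & rand_mask                     # artificially masked, GT known
    model_input = np.where(masked, 0.0, sensor_depth)

    # Placeholder "model": mean-fill of the remaining valid depth. A real
    # model would be a network that also attends to the visual context `rgb`.
    remaining = model_input[model_input > 0]
    pred = np.where(model_input > 0, model_input, remaining.mean())

    # Loss is computed only where ground truth exists and was masked out.
    loss = np.abs(pred[masked] - sensor_depth[masked]).mean()
    return model_input, loss
```

At inference time, no artificial masking is applied: the sensor's own invalid regions play the role of the mask, and the model fills them in from visual context.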
Source: arXiv:2601.17895