StereoVGGT: A Training-Free Visual Geometry Transformer for Stereo Vision
1️⃣ One-sentence summary
This paper proposes StereoVGGT, a method that adapts an already-pretrained 3D vision foundation model so that, without any additional training, it handles stereo matching and stereo conversion more effectively, achieving leading performance on established benchmarks.
Driven by the advancement of 3D devices, stereo vision tasks including stereo matching and stereo conversion have emerged as a critical research frontier. Contemporary stereo vision backbones typically rely on either monocular depth estimation (MDE) models or visual foundation models (VFMs). Crucially, these models are predominantly pretrained without explicit supervision of camera poses. Given that such geometric knowledge is indispensable for stereo vision, the absence of explicit spatial constraints constitutes a significant performance bottleneck for existing architectures. Recognizing that the Visual Geometry Grounded Transformer (VGGT) operates as a foundation model pretrained on extensive 3D priors, including camera poses, we investigate its potential as a robust backbone for stereo vision tasks. Nevertheless, empirical results indicate that its direct application to stereo vision yields suboptimal performance. We observe that VGGT suffers from a more pronounced degradation of geometric details during feature extraction. This characteristic conflicts with the requirements of binocular stereo vision, thereby constraining its efficacy for related tasks. To bridge this gap, we propose StereoVGGT, a feature backbone specifically tailored for stereo vision. By leveraging the frozen VGGT and introducing a training-free feature adjustment pipeline, we mitigate geometric degradation and harness the latent camera calibration knowledge embedded within the model. A StereoVGGT-based stereo matching network achieves the $1^{st}$ rank among all published methods on the KITTI benchmark, validating that StereoVGGT serves as a highly effective backbone for stereo vision.
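The abstract does not spell out what the training-free feature adjustment pipeline computes, but the general pattern it names — extracting features from a frozen pretrained backbone and adjusting them at inference time without learning any parameters — can be sketched. Below is a minimal, hypothetical illustration: the backbone is faked as a fixed random projection, and per-channel statistic matching between the two stereo views stands in for the (unspecified) adjustment; none of these function names or choices come from the paper.

```python
import numpy as np

def frozen_backbone(image):
    """Stand-in for a frozen pretrained encoder (e.g. VGGT): its weights
    never change, it just maps pixels to features. Here it is faked with
    a fixed random projection purely for illustration."""
    rng = np.random.default_rng(0)                 # fixed seed = "frozen" weights
    w = rng.standard_normal((image.shape[-1], 16))
    return image @ w                               # (num_pixels, 16) features

def match_channel_stats(feat, ref):
    """Training-free adjustment: rescale each feature channel of `feat`
    so its mean/std match the reference view's. Nothing is learned --
    every quantity is computed from the inputs at inference time."""
    mu_f, sd_f = feat.mean(axis=0), feat.std(axis=0) + 1e-6
    mu_r, sd_r = ref.mean(axis=0), ref.std(axis=0) + 1e-6
    return (feat - mu_f) / sd_f * sd_r + mu_r

# A toy stereo pair, flattened to (num_pixels, channels); the right view
# carries a photometric offset that perturbs its feature statistics.
left = np.random.default_rng(1).standard_normal((64, 3))
right = left + 0.3

f_left = frozen_backbone(left)
f_right = match_channel_stats(frozen_backbone(right), f_left)

# After adjustment, the per-channel statistics of both views agree,
# making the two feature maps directly comparable for matching.
print(np.allclose(f_right.mean(axis=0), f_left.mean(axis=0), atol=1e-6))
```

The point of the sketch is only the structure of such a pipeline: the backbone call involves no gradient updates, and the adjustment step is a closed-form computation rather than a trained module.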
Source: arXiv: 2603.29368