Fast-FoundationStereo: Real-Time Zero-Shot Stereo Matching
1️⃣ One-Sentence Summary
This paper proposes a new stereo vision architecture called Fast-FoundationStereo, which for the first time achieves real-time inference speed while retaining strong zero-shot generalization, running more than 10x faster than the previous state-of-the-art model.
Stereo foundation models achieve strong zero-shot generalization but remain computationally prohibitive for real-time applications. Efficient stereo architectures, on the other hand, sacrifice robustness for speed and require costly per-domain fine-tuning. To bridge this gap, we present Fast-FoundationStereo, a family of architectures that achieve, for the first time, strong zero-shot generalization at real-time frame rate. We employ a divide-and-conquer acceleration strategy with three components: (1) knowledge distillation to compress the hybrid backbone into a single efficient student; (2) blockwise neural architecture search for automatically discovering optimal cost filtering designs under latency budgets, reducing search complexity exponentially; and (3) structured pruning for eliminating redundancy in the iterative refinement module. Furthermore, we introduce an automatic pseudo-labeling pipeline used to curate 1.4M in-the-wild stereo pairs to supplement synthetic training data and facilitate knowledge distillation. The resulting model can run over 10x faster than FoundationStereo while closely matching its zero-shot accuracy, thus establishing a new state-of-the-art among real-time methods. Project page: this https URL
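The first acceleration component, knowledge distillation of the hybrid backbone into a single efficient student, can be pictured with a minimal training-step sketch. Everything below is an illustrative assumption rather than the paper's actual pipeline: the `teacher`/`student` interfaces (each returning a feature map and a disparity map), the loss terms, and the weights `w_feat`/`w_disp` are hypothetical.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of backbone distillation for stereo matching.
# Assumes `teacher` (e.g. a large frozen foundation model) and `student`
# (a compact network) are nn.Modules that map a stereo pair to
# (feature_map, disparity_map). Module interfaces and loss weights are
# assumptions for illustration, not the paper's implementation.

def distillation_step(teacher, student, left, right, optimizer,
                      w_feat=1.0, w_disp=1.0):
    """One training step: align the student's features and disparity with the teacher's."""
    teacher.eval()
    with torch.no_grad():
        t_feat, t_disp = teacher(left, right)   # frozen teacher predictions

    s_feat, s_disp = student(left, right)

    # Feature-level distillation: match intermediate representations.
    loss_feat = F.mse_loss(s_feat, t_feat)

    # Output-level distillation: the teacher's disparity acts as a dense
    # pseudo-label for the student.
    loss_disp = F.smooth_l1_loss(s_disp, t_disp)

    loss = w_feat * loss_feat + w_disp * loss_disp
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the same spirit, the paper's pseudo-labeling pipeline over 1.4M in-the-wild stereo pairs supplies additional targets of this output-level form, which is what makes large-scale distillation beyond synthetic data feasible.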
Source: arXiv:2512.11130