Just Zoom In: Cross-View Geo-Localization via Autoregressive Zooming
1️⃣ One-Sentence Summary
This paper proposes a new method called "Just Zoom In" that, like a person reading a map, starts from a city-wide overhead view and zooms in step by step toward the target location, matching street-view photos to satellite imagery and thereby achieving more accurate geo-localization without GPS.
Cross-view geo-localization (CVGL) estimates a camera's location by matching a street-view image to geo-referenced overhead imagery, enabling GPS-denied localization and navigation. Existing methods almost universally formulate CVGL as an image-retrieval problem in a contrastively trained embedding space. This ties performance to large batches and hard negative mining, and it ignores both the geometric structure of maps and the coverage mismatch between street-view and overhead imagery. In particular, salient landmarks visible from the street view can fall outside a fixed satellite crop, making retrieval targets ambiguous and limiting explicit spatial inference over the map. We propose Just Zoom In, an alternative formulation that performs CVGL via autoregressive zooming over a city-scale overhead map. Starting from a coarse satellite view, the model takes a short sequence of zoom-in decisions to select a terminal satellite cell at a target resolution, without contrastive losses or hard negative mining. We further introduce a realistic benchmark with crowd-sourced street views and high-resolution satellite imagery that reflects real capture conditions. On this benchmark, Just Zoom In achieves state-of-the-art performance, improving Recall@1 within 50 m by 5.5% and Recall@1 within 100 m by 9.6% over the strongest contrastive-retrieval baseline. These results demonstrate the effectiveness of sequential coarse-to-fine spatial reasoning for cross-view geo-localization.
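The abstract describes localization as a short sequence of zoom-in decisions over a city-scale map, ending at a terminal satellite cell at the target resolution. A minimal sketch of this coarse-to-fine idea, assuming a quadtree-style discretization where each step picks one of four quadrants (the paper's actual tokenization and model are not specified here):

```python
# Hypothetical sketch (not the paper's implementation): a depth-d sequence of
# quadrant choices identifies one terminal cell in a 2^d x 2^d overhead grid.

def zoom_path_to_cell(choices):
    """Map quadrant choices (0=NW, 1=NE, 2=SW, 3=SE) to the (row, col)
    of the terminal cell in a 2**len(choices)-sided grid."""
    row, col = 0, 0
    for c in choices:
        row = row * 2 + (c // 2)  # 0/1 -> top half, 2/3 -> bottom half
        col = col * 2 + (c % 2)   # 0/2 -> left half, 1/3 -> right half
    return row, col

# Example: three zoom steps select one of 8 x 8 = 64 cells.
print(zoom_path_to_cell([3, 0, 1]))  # -> (4, 5)
```

In an autoregressive model, each choice would be predicted conditioned on the street-view image and the zoom decisions made so far, so localization precision doubles per side with every step rather than relying on retrieval against a fixed embedding index.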
Source: arXiv:2603.25686