arXiv submission date: 2026-02-18
📄 Abstract - GPSBench: Do Large Language Models Understand GPS Coordinates?

Large Language Models (LLMs) are increasingly deployed in applications that interact with the physical world, such as navigation, robotics, and mapping, making robust geospatial reasoning a critical capability. Despite this, the ability of LLMs to reason about GPS coordinates and real-world geography remains underexplored. We introduce GPSBench, a dataset of 57,800 samples across 17 tasks for evaluating geospatial reasoning in LLMs, spanning geometric coordinate operations (e.g., distance and bearing computation) and reasoning that integrates coordinates with world knowledge. Focusing on intrinsic model capabilities rather than tool use, we evaluate 14 state-of-the-art LLMs and find that GPS reasoning remains challenging, with substantial variation across tasks: models are generally more reliable at real-world geographic reasoning than at geometric computations. Geographic knowledge degrades hierarchically, with strong country-level performance but weak city-level localization, while robustness to coordinate noise suggests genuine coordinate understanding rather than memorization. We further show that GPS-coordinate augmentation can improve performance on downstream geospatial tasks, and that finetuning induces trade-offs between gains in geometric computation and degradation in world knowledge. Our dataset and reproducible code are available at this https URL
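
To make the "geometric coordinate operations" concrete, here is a minimal sketch of the two computations the abstract names, great-circle distance and initial bearing, using the standard haversine and forward-azimuth formulas. The function names and the Paris-to-Berlin example are illustrative assumptions, not code from GPSBench.

```python
# Illustrative only: standard haversine distance and initial bearing between two
# GPS coordinates (lat/lon in degrees). Function names and the Paris -> Berlin
# example are assumptions for this sketch, not code from GPSBench.
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Forward azimuth in degrees, measured clockwise from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    x = math.sin(dlam) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

if __name__ == "__main__":
    # Paris (48.8566, 2.3522) to Berlin (52.5200, 13.4050): roughly 878 km, bearing about 58 degrees
    print(haversine_km(48.8566, 2.3522, 52.5200, 13.4050))
    print(initial_bearing_deg(48.8566, 2.3522, 52.5200, 13.4050))
```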

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: geospatial reasoning, gps coordinates, benchmark, world knowledge, intrinsic evaluation

GPSBench: Do Large Language Models Understand GPS Coordinates?


1️⃣ One-sentence summary

This paper introduces GPSBench, a dataset of 57,800 samples, to evaluate the geospatial reasoning abilities of large language models. It finds that models do reasonably well on real-world geographic knowledge but still struggle with precise geometric coordinate computation, and that finetuning can improve some tasks at the cost of part of the models' world knowledge.
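
The abstract's robustness-to-noise finding (answers should be stable when coordinates are slightly perturbed) could be probed with a sketch like the one below. `ask_model`, the noise magnitude, and the stability metric are hypothetical placeholders for illustration, not the paper's evaluation protocol.

```python
# Hypothetical probe (not the paper's protocol): add small noise to a coordinate,
# re-ask the same question, and measure how often the answer stays the same.
# `ask_model` is a placeholder for any LLM query function taking (lat, lon, question).
import random

def perturb(lat, lon, max_deg=0.01):
    """Jitter both coordinates by up to 0.01 degrees (about 1 km of latitude)."""
    return (lat + random.uniform(-max_deg, max_deg),
            lon + random.uniform(-max_deg, max_deg))

def stability_rate(ask_model, lat, lon, question, n_trials=5):
    """Fraction of noisy re-queries whose answer matches the clean-coordinate answer."""
    clean_answer = ask_model(lat, lon, question)
    matches = 0
    for _ in range(n_trials):
        noisy_lat, noisy_lon = perturb(lat, lon)
        matches += int(ask_model(noisy_lat, noisy_lon, question) == clean_answer)
    return matches / n_trials
```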

Source: arXiv:2602.16105