OmniEarth: A Benchmark for Evaluating Vision-Language Models in Geospatial Tasks
1️⃣ One-sentence summary
This paper introduces OmniEarth, a comprehensive benchmark for systematically evaluating the perception, reasoning, and robustness of vision-language models on complex geospatial tasks such as remote sensing and Earth observation, revealing the shortcomings of existing models in this domain.
Vision-Language Models (VLMs) have demonstrated effective perception and reasoning capabilities on general-domain tasks, leading to growing interest in their application to Earth observation. However, a systematic benchmark for comprehensively evaluating remote sensing vision-language models (RSVLMs) remains lacking. To address this gap, we introduce OmniEarth, a benchmark for evaluating RSVLMs under realistic Earth observation scenarios. OmniEarth organizes tasks along three capability dimensions: perception, reasoning, and robustness. It defines 28 fine-grained tasks covering multi-source sensing data and diverse geospatial contexts. The benchmark supports two task formulations: multiple-choice VQA and open-ended VQA. The latter includes pure text outputs for captioning tasks, bounding box outputs for visual grounding tasks, and mask outputs for segmentation tasks. To reduce linguistic bias and examine whether model predictions rely on visual evidence, OmniEarth adopts a blind test protocol and a quintuple semantic consistency requirement. OmniEarth includes 9,275 carefully quality-controlled images, including proprietary satellite imagery from Jilin-1 (JL-1), along with 44,210 manually verified instructions. We conduct a systematic evaluation of contrastive learning-based models, general closed-source and open-source VLMs, as well as RSVLMs. Results show that existing VLMs still struggle with geospatially complex tasks, revealing clear gaps that need to be addressed for remote sensing applications. OmniEarth is publicly available at this https URL.
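To make the blind-test idea concrete, here is a minimal sketch of how such a protocol can be checked. This is not OmniEarth's actual code or data format; the record fields and function names are hypothetical. The idea: if a model answers a multiple-choice VQA item correctly *without* seeing the image, the item may be solvable from linguistic priors alone and is a candidate for filtering.

```python
# Hypothetical sketch of a blind-test protocol for a multiple-choice VQA
# benchmark. Field and function names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class VQAItem:
    question: str
    choices: list = field(default_factory=list)  # e.g. ["A. river", "B. road", ...]
    answer: str = ""                             # gold choice label, e.g. "A"


def blind_test_flags(items, blind_model):
    """Flag items the model answers correctly WITHOUT the image.

    `blind_model(question, choices)` returns a choice label given text only.
    Flagged items likely leak the answer through language alone.
    """
    return [blind_model(it.question, it.choices) == it.answer for it in items]


# Toy usage: a "blind" model that always guesses "A".
items = [
    VQAItem("What land cover dominates the scene?", ["A", "B", "C", "D"], "A"),
    VQAItem("How many ships are visible?", ["A", "B", "C", "D"], "C"),
]
always_a = lambda question, choices: "A"
print(blind_test_flags(items, always_a))  # → [True, False]
```

The first item is flagged because the text-only model happens to get it right, so under a blind-test protocol it would be inspected or rewritten before inclusion.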
Source: arXiv: 2603.09471