arXiv submission date: 2026-03-12
📄 Abstract - TopoBench: Benchmarking LLMs on Hard Topological Reasoning

Solving topological grid puzzles requires reasoning over global spatial invariants such as connectivity, loop closure, and region symmetry, and remains challenging for even the most powerful large language models (LLMs). To study these abilities under controlled settings, we introduce TopoBench, a benchmark of six puzzle families across three difficulty levels. We evaluate strong reasoning LLMs on TopoBench and find that even frontier models solve fewer than one quarter of hard instances, with two families nearly unsolved. To investigate whether these failures stem from reasoning limitations or from difficulty extracting and maintaining spatial constraints, we annotate 750 chain-of-thought traces with an error taxonomy that surfaces four candidate causal failure modes, then test them with targeted interventions simulating each error type. These interventions show that certain error patterns, such as premature commitment and constraint forgetting, directly impair puzzle solving, while repeated reasoning is a benign effect of search. Finally, we study mitigation strategies including prompt guidance, cell-aligned grid representations, and tool-based constraint checking, finding that the bottleneck lies in extracting constraints from spatial representations and not in reasoning over them. Code and data are available at this http URL.
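To make the notion of a "global spatial invariant" concrete: connectivity of a grid region cannot be verified by inspecting any single cell, only by traversing the whole structure. A minimal flood-fill sketch of such a check (a generic illustration, not code from the paper) might look like:

```python
from collections import deque

def count_regions(grid):
    """Count 4-connected regions of filled cells ('#') in a character grid.

    Connectivity is a global invariant: the answer depends on the whole
    grid, which is why it is hard to track step-by-step in a text trace.
    """
    rows, cols = len(grid), len(grid[0])
    seen = set()
    regions = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '#' and (r, c) not in seen:
                regions += 1
                queue = deque([(r, c)])
                seen.add((r, c))
                while queue:  # breadth-first flood fill of one region
                    cr, cc = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] == '#'
                                and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            queue.append((nr, nc))
    return regions

grid = ["##.",
        "..#",
        "#.#"]
print(count_regions(grid))  # → 3
```

A tool-based constraint checker of the kind the abstract mentions could expose a function like this to the model, so the model only has to decide *where* to place cells, not re-derive connectivity in its head.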

Top tags: llm benchmark model evaluation
Detailed tags: topological reasoning spatial reasoning chain of thought error analysis puzzle solving

TopoBench: Benchmarking LLMs on Hard Topological Reasoning


1️⃣ One-sentence summary

This paper introduces TopoBench, a benchmark for evaluating large language models on topological puzzles involving complex spatial relations such as connectivity and loop closure. It finds that even the most advanced current models perform poorly on hard instances, and that the core bottleneck lies in extracting constraints from spatial representations rather than in reasoning over them.

Source: arXiv:2603.12133