arXiv submission date: 2026-03-16
📄 Abstract - HorizonMath: Measuring AI Progress Toward Mathematical Discovery with Automatic Verification

Can AI make progress on important, unsolved mathematical problems? Large language models are now capable of sophisticated mathematical and scientific reasoning, but whether they can perform novel research remains widely debated and underexplored. We introduce HorizonMath, a benchmark of over 100 predominantly unsolved problems spanning 8 domains in computational and applied mathematics, paired with an open-source evaluation framework for automated verification. Our benchmark targets a class of problems where discovery is hard, requiring meaningful mathematical insight, but verification is computationally efficient and simple. Because the solutions are unknown, HorizonMath is immune to data contamination, and most state-of-the-art models score near 0%. Existing research-level benchmarks instead rely on formal proof verification or manual review, both of which are expensive to scale. Using this platform, we find two problems for which GPT 5.4 Pro proposes solutions that improve on the best-known published results, representing potential novel contributions (pending expert review). We release HorizonMath as an open challenge and a growing community resource, where correct solutions to problems in the unsolved classes could constitute novel results in the mathematical literature.
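
The "discovery is hard, verification is cheap" design lends itself to a simple evaluation harness. Below is a minimal sketch of what such an automatic verifier might look like; the `Problem` dataclass, the `verify` function, and the toy point-packing instance are all hypothetical illustrations under assumed conventions, not the actual HorizonMath API.

```python
# Hypothetical sketch of a "discovery is hard, verification is cheap" problem.
# None of these names come from the HorizonMath codebase.

from dataclasses import dataclass
from typing import Callable, Optional, Sequence


@dataclass
class Problem:
    """An open problem with a cheap automatic scorer (lower score is better)."""
    name: str
    score: Callable[[Sequence[float]], Optional[float]]  # None = invalid candidate
    best_known: float  # best published value to beat


def verify(problem: Problem, candidate: Sequence[float]) -> bool:
    """Fast check: is the candidate valid and strictly better than best known?"""
    s = problem.score(candidate)
    return s is not None and s < problem.best_known


# Toy instance: place 3 points in the unit square maximizing the minimum
# pairwise distance, encoded as minimizing the negated minimum distance.
def neg_min_gap(coords: Sequence[float]) -> Optional[float]:
    pts = list(zip(coords[0::2], coords[1::2]))
    if any(not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0) for x, y in pts):
        return None  # out of bounds: constraint violated
    gaps = [((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
            for i, a in enumerate(pts) for b in pts[i + 1:]]
    return -min(gaps)


# Pretend the best published minimum gap is 0.9 (i.e., a score of -0.9).
toy = Problem("toy-packing-3", neg_min_gap, best_known=-0.9)

# Three corners of the unit square give a minimum gap of 1.0, beating 0.9.
print(verify(toy, [0.0, 0.0, 1.0, 1.0, 0.0, 1.0]))  # True
```

The asymmetry is the point: producing a record-beating candidate may require real mathematical insight, while checking one against the best published value costs only a few arithmetic operations, which is what makes large-scale automated evaluation feasible.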

Top tags: llm, benchmark, model evaluation
Detailed tags: mathematical reasoning, automated verification, unsolved problems, ai research, data contamination

HorizonMath: Measuring AI Progress Toward Mathematical Discovery with Automatic Verification


1️⃣ One-Sentence Summary

This paper introduces HorizonMath, a benchmark of more than 100 unsolved mathematical problems, designed to evaluate via automatic verification whether AI can make genuinely novel discoveries in mathematical research; it also offers a preliminary demonstration that a state-of-the-art model achieved potential breakthroughs over the best-known results on two of the problems.

Source: arXiv:2603.15617