arXiv submission date: 2025-12-17
📄 Abstract - FrontierCS: Evolving Challenges for Evolving Intelligence

We introduce FrontierCS, a benchmark of 156 open-ended problems across diverse areas of computer science, designed and reviewed by experts, including CS PhDs and top-tier competitive programming participants and problem setters. Unlike existing benchmarks that focus on tasks with known optimal solutions, FrontierCS targets problems where the optimal solution is unknown, but the quality of a solution can be objectively evaluated. Models solve these tasks by implementing executable programs rather than outputting a direct answer. FrontierCS includes algorithmic problems, which are often NP-hard variants of competitive programming problems with objective partial scoring, and research problems with the same property. For each problem we provide an expert reference solution and an automatic evaluator. Combining open-ended design, measurable progress, and expert curation, FrontierCS provides a benchmark at the frontier of computer-science difficulty. Empirically, we find that frontier reasoning models still lag far behind human experts on both the algorithmic and research tracks, that increasing reasoning budgets alone does not close this gap, and that models often over-optimize for generating merely workable code instead of discovering high-quality algorithms and system designs.
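The abstract states that for each problem FrontierCS ships an expert reference solution and an automatic evaluator with objective partial scoring, and that models submit executable programs rather than direct answers. The sketch below is a hypothetical illustration of what such an evaluation loop could look like; every name in it (`run_solver`, `partial_score`, the instance file layout, and the higher-is-better normalization against the reference) is an assumption for illustration, not the paper's actual evaluator or API.

```python
# Hypothetical sketch of a FrontierCS-style partial-scoring evaluator:
# run the submitted program on each test instance, read its objective value,
# and normalize against the expert reference solution. All names and the
# "higher objective is better" convention are illustrative assumptions.
import json
import subprocess
from pathlib import Path


def run_solver(solver: Path, instance: Path, timeout_s: int = 60) -> float:
    """Run a solver executable on one instance; it is assumed to print a
    single numeric objective value (higher is better) to stdout."""
    out = subprocess.run(
        [str(solver), str(instance)],
        capture_output=True, text=True, timeout=timeout_s, check=True,
    )
    return float(out.stdout.strip())


def partial_score(submission: Path, reference: Path, instances: list[Path]) -> float:
    """Average per-instance ratio of the submission's objective to the
    reference solution's objective, capped at 1.0 per instance."""
    ratios = []
    for inst in instances:
        try:
            sub_val = run_solver(submission, inst)
        except (subprocess.SubprocessError, ValueError):
            ratios.append(0.0)  # crash, timeout, or malformed output scores 0
            continue
        # The expert reference solution is assumed to always run successfully.
        ref_val = run_solver(reference, inst)
        ratios.append(min(sub_val / ref_val, 1.0) if ref_val > 0 else 0.0)
    return sum(ratios) / len(ratios) if ratios else 0.0


if __name__ == "__main__":
    instances = sorted(Path("instances").glob("*.json"))
    score = partial_score(Path("./submission"), Path("./reference"), instances)
    print(json.dumps({"score": round(score, 4)}))
```

Normalizing each instance's score against the reference solution is one simple way to get an objective partial score when the true optimum is unknown; the paper's actual scoring rules may differ per problem.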

Top-level tags: benchmark, model evaluation, agents
Detailed tags: code generation, algorithmic reasoning, open-ended problems, automatic evaluation, expert-level performance

FrontierCS: Evolving Challenges for Evolving Intelligence


1️⃣ One-sentence summary

This paper introduces FrontierCS, a new computer-science benchmark of 156 open-ended problems that have no known optimal solution but whose solution quality can be objectively evaluated. It is designed to measure AI models' real ability to solve frontier-level problems such as algorithm design and systems research, and finds that current state-of-the-art models still lag far behind human experts.


Source: arXiv:2512.15699