ClassEval-Pro: A Cross-Domain Benchmark for Class-Level Code Generation
1️⃣ One-Sentence Summary
This paper introduces ClassEval-Pro, an automatically constructed benchmark of 300 class-level programming tasks spanning 11 domains for evaluating large language models' ability to build complete class implementations. Even the strongest current model passes only 45.6% of the tasks, and cross-method coordination emerges as the core bottleneck.
LLMs have achieved strong results on both function-level code synthesis and repository-level code modification, yet a capability that falls between these two extremes -- compositional code creation, i.e., building a complete, internally structured class from a specification -- remains underserved. Current evaluations are either confined to isolated functions or rely on manually curated class-level tasks that are expensive to scale and increasingly susceptible to data contamination. We introduce ClassEval-Pro, a benchmark of 300 class-level tasks spanning 11 domains, constructed through an automated three-stage pipeline that combines complexity enhancement, cross-domain class composition, and integration of real-world GitHub code contributed after January 2025. Every task is validated by an LLM Judge Ensemble and must pass test suites with over 90% line coverage. We evaluate five frontier LLMs under five generation strategies. The best model achieves only 45.6% class-level Pass@1, with a 17.7-point gap between the strongest and weakest models, confirming the benchmark's discriminative power. Strategy choice strongly interacts with model capability: structured approaches such as bottom-up improve weaker models by up to 9.4 percentage points, while compositional generation collapses to as low as 1.3%. Error analysis over 500 manually annotated failures reveals that logic errors (56.2%) and dependency errors (38.0%) dominate, identifying cross-method coordination as the core bottleneck.
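Concretely, class-level Pass@1 here means a task counts as solved only if the model's single generated class passes its entire test suite, so one failing method fails the whole task. Below is a minimal sketch of such a scoring harness; the file layout, pytest invocation, and helper names are illustrative assumptions, not the authors' actual evaluation pipeline:

```python
import subprocess
import tempfile
from pathlib import Path

def run_task(generated_class: str, test_suite: str, timeout: int = 30) -> bool:
    """Return True iff the generated class passes its full test suite.

    Hypothetical helper: writes the candidate class and its tests into a
    sandbox directory and runs pytest there. Class-level scoring is
    all-or-nothing, so a single failing method fails the whole task.
    """
    with tempfile.TemporaryDirectory() as tmp:
        tmp_path = Path(tmp)
        (tmp_path / "solution.py").write_text(generated_class)
        (tmp_path / "test_solution.py").write_text(test_suite)
        try:
            result = subprocess.run(
                ["python", "-m", "pytest", "test_solution.py", "-q"],
                cwd=tmp_path,
                capture_output=True,
                timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return False  # hangs count as failures
        return result.returncode == 0

def class_level_pass_at_1(samples: list[tuple[str, str]]) -> float:
    """Fraction of tasks whose single generated sample passes all tests."""
    passed = sum(run_task(cls, tests) for cls, tests in samples)
    return passed / len(samples)
```

Under this all-or-nothing scoring, the cross-method coordination failures the error analysis identifies are penalized as heavily as an entirely missing method, which helps explain why class-level Pass@1 sits so far below typical function-level results.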
Source: arXiv: 2604.26923