Abstract - CodeSpecBench: Benchmarking LLMs for Executable Behavioral Specification Generation
Large language models (LLMs) can generate code from natural language, but the extent to which they capture intended program behavior remains unclear. Executable behavioral specifications, defined via preconditions and postconditions, provide a concrete means to assess such understanding. However, existing work on specification generation is constrained in evaluation methodology, task settings, and specification expressiveness. We introduce CodeSpecBench, a benchmark for executable behavioral specification generation under an execution-based evaluation protocol. CodeSpecBench supports both function-level and repository-level tasks and encodes specifications as executable Python functions. Constructed from diverse real-world codebases, it enables a realistic assessment of both correctness (accepting valid behaviors) and completeness (rejecting invalid behaviors). Evaluating 15 state-of-the-art LLMs on CodeSpecBench, we observe a sharp performance degradation on repository-level tasks, where the best model attains only a 20.2% pass rate. We further find that specification generation is substantially more challenging than code generation, indicating that strong coding performance does not necessarily reflect deep understanding of intended program semantics. Our data and code are available at this https URL.
CodeSpecBench: Benchmarking LLMs for Executable Behavioral Specification Generation
1️⃣ One-Sentence Summary
This paper introduces a new benchmark, CodeSpecBench, for evaluating whether large language models can generate accurate and complete executable behavioral specifications (i.e., program preconditions and postconditions defined in code). The study finds that even today's most advanced models struggle to understand complex program semantics and generate such specifications, performing far worse than when generating code directly.
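To make the core idea concrete, here is a minimal sketch (not taken from the paper) of what an executable behavioral specification could look like: preconditions and postconditions written as plain Python functions, checked by actually running a candidate implementation. All names here (`precondition`, `postcondition`, `check`) are illustrative assumptions, not CodeSpecBench's actual API.

```python
def precondition(xs: list) -> bool:
    # Input must be a list of integers.
    return isinstance(xs, list) and all(isinstance(x, int) for x in xs)

def postcondition(xs: list, result: list) -> bool:
    # Output must equal the sorted version of the input.
    return result == sorted(xs)

def check(func, xs) -> bool:
    """Execution-based check: accept valid behaviors, reject invalid ones."""
    if not precondition(xs):
        return False
    return postcondition(xs, func(xs))

# A correct implementation satisfies the specification...
print(check(sorted, [3, 1, 2]))         # True
# ...while a buggy one (identity function) is rejected.
print(check(lambda xs: xs, [3, 1, 2]))  # False
```

Under this framing, a *correct* specification accepts all valid behaviors, while a *complete* one rejects invalid behaviors, matching the correctness/completeness distinction the benchmark evaluates.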