CONCUR: Benchmarking LLMs for Concurrent Code Generation
1️⃣ One-sentence summary
This paper introduces CONCUR, a new benchmark designed specifically to evaluate large language models' ability to generate complex, error-prone concurrent code, filling the gap left by existing benchmarks that focus only on sequential code.
Leveraging Large Language Models (LLMs) for code generation has increasingly become common practice in software engineering, and benchmarks have been established to evaluate the code generation capabilities of LLMs. However, existing benchmarks focus primarily on sequential code and cannot effectively evaluate LLMs on concurrent code generation. Compared to sequential code, concurrent code exhibits greater complexity and has unique classes of bugs, such as deadlocks and race conditions, that do not occur in sequential code. A benchmark for evaluating sequential code generation is therefore ill-suited to evaluating concurrent code generation with LLMs. To address this gap, we designed CONCUR, a benchmark specifically aimed at evaluating the capability of LLMs to generate concurrent code. CONCUR consists of a base set of 43 concurrency problems derived from a standard concurrency textbook, together with 72 validated mutant variants, for 115 problems in total. The base problems serve as the semantic core of the benchmark, while the mutants expand linguistic and structural diversity. We evaluated a range of LLMs on CONCUR, highlighting limitations of current models. Overall, our work provides a novel direction for evaluating the capability of LLMs to generate code with a focus on concurrency.
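To make concrete the kind of concurrency bug the abstract mentions, here is a minimal sketch (not taken from the paper or its benchmark problems) of a classic race condition: two threads perform an unsynchronized read-modify-write on a shared counter, and the fix serializes the update with a lock.

```python
import threading

N = 100_000  # increments per thread

def unsafe_increment(state):
    # Race condition: "count += 1" is load, add, store; two threads
    # can interleave these steps and lose updates.
    for _ in range(N):
        state["count"] += 1

def safe_increment(state, lock):
    # The lock makes each read-modify-write atomic with respect to
    # the other thread, so no updates are lost.
    for _ in range(N):
        with lock:
            state["count"] += 1

def run(worker, *extra):
    state = {"count": 0}
    threads = [threading.Thread(target=worker, args=(state, *extra))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["count"]

print(run(safe_increment, threading.Lock()))  # always 200000
print(run(unsafe_increment))  # may fall short of 200000 under contention
```

The unsynchronized version may still happen to produce the correct total on a given run, which is exactly why such bugs are hard to catch with the pass/fail unit tests used by sequential-code benchmarks.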
Source: arXiv: 2603.03683