arXiv submission date: 2026-04-14
📄 Abstract - Beyond Output Correctness: Benchmarking and Evaluating Large Language Model Reasoning in Coding Tasks

Large language models (LLMs) increasingly rely on explicit reasoning to solve coding tasks, yet evaluating the quality of this reasoning remains challenging. Existing reasoning evaluators are not designed for coding, and current benchmarks focus primarily on code generation, leaving other coding tasks largely unexplored. We introduce CodeRQ-Bench, the first benchmark for evaluating LLM reasoning quality across three coding task categories: generation, summarization, and classification. Using this benchmark, we analyze 1,069 mismatch cases from existing evaluators, identify five recurring limitations, and derive four design insights for reasoning evaluation in coding tasks. Guided by these insights, we propose VERA, a two-stage evaluator that combines evidence-grounded verification with ambiguity-aware score correction. Experiments on CodeRQ-Bench show that VERA consistently outperforms strong baselines across four datasets, improving AUCROC by up to 0.26 and AUPRC by up to 0.21. We release CodeRQ-Bench at this https URL, supporting future investigations.

Top-level tags: llm model evaluation benchmark
Detailed tags: reasoning evaluation code generation benchmarking verification coding tasks

Beyond Output Correctness: Benchmarking and Evaluating Large Language Model Reasoning in Coding Tasks


1️⃣ One-sentence summary

This paper introduces CodeRQ-Bench, the first benchmark dedicated to evaluating the reasoning quality of large language models across multiple coding tasks (generation, summarization, and classification), and builds on it to design VERA, a two-stage evaluator that combines evidence-grounded verification with ambiguity-aware score correction, substantially improving the accuracy of reasoning-quality evaluation.
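To make the two-stage idea concrete, here is a minimal sketch of what such an evaluator could look like: a first stage that checks each reasoning step against evidence, and a second stage that corrects the score for ambiguous steps instead of treating them as outright failures. All names (`Verdict`, `stage1_verify`, `stage2_score`, the discount scheme) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    supported: bool   # Stage 1: did the evidence check pass?
    ambiguous: bool   # Stage 1: was the evidence inconclusive?

def stage1_verify(claims, evidence_check):
    """Stage 1 (evidence-grounded verification): run each reasoning
    claim through an evidence check that returns (supported, ambiguous)."""
    return [Verdict(c, *evidence_check(c)) for c in claims]

def stage2_score(verdicts, ambiguity_discount=0.5):
    """Stage 2 (ambiguity-aware score correction): aggregate verdicts
    into a [0, 1] score, giving ambiguous claims partial credit rather
    than counting them as failures."""
    if not verdicts:
        return 0.0
    total = 0.0
    for v in verdicts:
        if v.supported:
            total += 1.0
        elif v.ambiguous:
            total += ambiguity_discount
    return total / len(verdicts)
```

For example, with a toy evidence check that marks one claim supported, one ambiguous, and one unsupported, the score lands between the all-fail and all-pass extremes, which is the behavior the ambiguity correction is meant to provide.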

Source: arXiv:2604.12379