CodeTaste: Can LLMs Generate Human-Level Code Refactorings?
1️⃣ One-Sentence Summary
This paper introduces a benchmark called CodeTaste to evaluate large language models on code refactoring tasks. It finds that models can execute refactorings reliably when given detailed instructions, but still fall clearly short of autonomously discovering and selecting the refactorings that human developers would choose.
Large language model (LLM) coding agents can generate working code, but their solutions often accumulate complexity, duplication, and architectural debt. Human developers address such issues through refactoring: behavior-preserving program transformations that improve structure and maintainability. In this paper, we investigate if LLM agents (i) can execute refactorings reliably and (ii) identify the refactorings that human developers actually chose in real codebases. We present CodeTaste, a benchmark of refactoring tasks mined from large-scale multi-file changes in open-source repositories. To score solutions, we combine repository test suites with custom static checks that verify removal of undesired patterns and introduction of desired patterns using dataflow reasoning. Our experimental results indicate a clear gap across frontier models: agents perform well when refactorings are specified in detail, but often fail to discover the human refactoring choices when only presented with a focus area for improvement. A propose-then-implement decomposition improves alignment, and selecting the best-aligned proposal before implementation can yield further gains. CodeTaste provides an evaluation target and a potential preference signal for aligning coding agents with human refactoring decisions in realistic codebases.
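The abstract describes scoring solutions with "custom static checks that verify removal of undesired patterns and introduction of desired patterns." The paper's actual checks use dataflow reasoning; the sketch below is a much simpler, purely syntactic illustration of the idea using Python's `ast` module. The function and pattern names (`uses_call`, `check_refactoring`, `parse_legacy`, `parse_config`) are hypothetical, not from the paper.

```python
import ast


def uses_call(source: str, func_name: str) -> bool:
    """Return True if `source` contains a call to `func_name`.

    Hypothetical syntactic check; the paper's checks go further
    and use dataflow reasoning rather than name matching.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            callee = node.func
            # Match both plain calls `f(...)` and method calls `obj.f(...)`.
            if isinstance(callee, ast.Name) and callee.id == func_name:
                return True
            if isinstance(callee, ast.Attribute) and callee.attr == func_name:
                return True
    return False


def check_refactoring(source: str, undesired: str, desired: str) -> bool:
    """Pass only if the undesired pattern is gone and the desired one appears."""
    return not uses_call(source, undesired) and uses_call(source, desired)


# Example: a refactoring that replaced `parse_legacy` with `parse_config`.
refactored = "def load(path):\n    return parse_config(read_text(path))\n"
print(check_refactoring(refactored, "parse_legacy", "parse_config"))  # True
```

In the benchmark, a passing check of this kind would be combined with the repository's own test suite, so a solution must both preserve behavior and exhibit the intended structural change.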
Source: arXiv: 2603.04177