arXiv submission date: 2026-04-27
📄 Abstract - Understanding the Limits of Automated Evaluation for Code Review Bots in Practice

Automated code review (ACR) bots are increasingly used in industrial software development to assist developers during pull request (PR) review. As adoption grows, a key challenge is how to evaluate the usefulness of bot-generated comments reliably and at scale. In practice, such evaluation often relies on developer actions and annotations that are shaped by contextual and organizational factors, complicating their use as objective ground truth. We examine the feasibility and limitations of automating the evaluation of LLM-powered ACR bots in an industrial setting. We analyze an industrial dataset from Beko comprising 2,604 bot-generated PR comments, each labeled by software engineers as fixed/wontFix. Two automated evaluation approaches, G-Eval and an LLM-as-a-Judge pipeline, are applied using both binary decisions and a 0-4 Likert-scale formulation, enabling a controlled comparison against developer-provided labels. Across Gemini-2.5-pro, GPT-4.1-mini, and GPT-5.2, both evaluation strategies achieve only moderate alignment with human labels. Agreement ratios range from approximately 0.44 to 0.62, with noticeable variation across models and between binary and Likert-scale formulations, indicating sensitivity to both model choice and evaluation design. Our findings highlight practical limitations in fully automating the evaluation of ACR bot comments in industrial contexts. Developer actions such as resolving or ignoring comments reflect not only comment quality, but also contextual constraints, prioritization decisions, and workflow dynamics that are difficult to capture through static artifacts. Insights from a follow-up interview with a software engineering director further corroborate that developer labeling behavior is strongly influenced by workflow pressures and organizational constraints, reinforcing the challenges of treating such signals as objective ground truth.
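
To make the abstract's agreement-ratio comparison concrete, below is a minimal Python sketch of how an LLM judge's outputs might be scored against the developer-provided fixed/wontFix labels, for both the binary and the 0-4 Likert-scale formulations. This is illustrative only, not the paper's pipeline: the record fields, the "fixed → useful" mapping, and the Likert cutoff of 3 are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical record: one bot comment plus the developer's label and
# the automated judge's outputs. Field names are illustrative only.
@dataclass
class JudgedComment:
    developer_label: str   # "fixed" or "wontFix", as in the Beko dataset
    judge_binary: bool     # judge's direct useful / not-useful decision
    judge_likert: int      # judge's 0-4 quality score

def agreement_ratio_binary(records: list[JudgedComment]) -> float:
    """Fraction of comments where the judge's binary decision matches
    the developer action (fixed -> useful, wontFix -> not useful)."""
    hits = sum(
        r.judge_binary == (r.developer_label == "fixed") for r in records
    )
    return hits / len(records)

def agreement_ratio_likert(records: list[JudgedComment], threshold: int = 3) -> float:
    """Same comparison, but the 0-4 Likert score is first collapsed to a
    binary outcome; scores >= threshold count as 'useful'. The cutoff
    of 3 is an assumption, not taken from the paper."""
    hits = sum(
        (r.judge_likert >= threshold) == (r.developer_label == "fixed")
        for r in records
    )
    return hits / len(records)

if __name__ == "__main__":
    sample = [
        JudgedComment("fixed", True, 4),
        JudgedComment("wontFix", True, 3),
        JudgedComment("wontFix", False, 1),
        JudgedComment("fixed", False, 2),
    ]
    print(f"binary agreement: {agreement_ratio_binary(sample):.2f}")  # 0.50
    print(f"likert agreement: {agreement_ratio_likert(sample):.2f}")  # 0.50
```

Note how the two formulations can diverge: the Likert path introduces an extra free parameter (the threshold), which is one plausible source of the sensitivity to evaluation design that the paper reports.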

Top tags: llm, model evaluation, machine learning
Detailed tags: code review evaluation automation, industrial, llm-as-a-judge

Understanding the Limits of Automated Evaluation for Code Review Bots in Practice


1️⃣ One-sentence summary

By analyzing real industrial data across multiple AI models, this paper finds that fully automated evaluation of the quality of automated code review (ACR) bot comments is of limited effectiveness, because developers' labeling behavior is shaped by workflow and organizational pressures and is not an objective standard.

Source: arXiv:2604.24525