arXiv submission date: 2026-02-11
📄 Abstract - FeatureBench: Benchmarking Agentic Coding for Complex Feature Development

Agents powered by large language models (LLMs) are increasingly adopted in the software industry, contributing code as collaborators or even autonomous developers. As their presence grows, it becomes important to assess the current boundaries of their coding abilities. Existing agentic coding benchmarks, however, cover a limited task scope, e.g., bug fixing within a single pull request (PR), and often rely on non-executable evaluations or lack an automated approach for continually updating the evaluation coverage. To address such issues, we propose FeatureBench, a benchmark designed to evaluate agentic coding performance in end-to-end, feature-oriented software development. FeatureBench incorporates an execution-based evaluation protocol and a scalable test-driven method that automatically derives tasks from code repositories with minimal human effort. By tracing from unit tests along a dependency graph, our approach can identify feature-level coding tasks spanning multiple commits and PRs scattered across the development timeline, while ensuring the proper functioning of other features after the separation. Using this framework, we curated 200 challenging evaluation tasks and 3825 executable environments from 24 open-source repositories in the first version of our benchmark. Empirical evaluation reveals that even a state-of-the-art agentic model such as Claude 4.5 Opus, which achieves a 74.4% resolved rate on SWE-bench, succeeds on only 11.0% of tasks, opening new opportunities for advancing agentic coding. Moreover, benefiting from our automated task collection toolkit, FeatureBench can be easily scaled and updated over time to mitigate data leakage. The inherent verifiability of constructed environments also makes our method potentially valuable for agent training.
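The abstract's task-derivation step traces outward from a unit test along the repository's dependency graph to find all the code a feature relies on. The sketch below illustrates that idea only; the module names, the dict-based graph, and the `trace_feature_modules` helper are illustrative assumptions, not the paper's actual toolkit or data.

```python
from collections import deque

# Hypothetical dependency graph: module -> modules it imports.
# The paper builds such graphs from real repositories; these names are made up.
DEP_GRAPH = {
    "tests/test_export.py": {"pkg/export.py"},
    "pkg/export.py": {"pkg/serializer.py", "pkg/utils.py"},
    "pkg/serializer.py": {"pkg/utils.py"},
    "pkg/utils.py": set(),
}

def trace_feature_modules(test_module, dep_graph):
    """Breadth-first trace from a unit test along the dependency graph,
    collecting every module the tested feature transitively depends on."""
    seen = {test_module}
    queue = deque([test_module])
    while queue:
        current = queue.popleft()
        for dep in dep_graph.get(current, set()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    seen.discard(test_module)  # keep only implementation modules
    return seen

if __name__ == "__main__":
    print(trace_feature_modules("tests/test_export.py", DEP_GRAPH))
    # -> {'pkg/export.py', 'pkg/serializer.py', 'pkg/utils.py'}
```

In the benchmark itself, the traced modules would define the scope of a feature-level task (potentially spanning many commits and PRs), while everything outside that scope must keep passing its own tests after the feature is separated out.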

Top-level tags: llm agents benchmark
Detailed tags: agentic coding, software development, execution-based evaluation, test-driven, code repositories

FeatureBench: Benchmarking Agentic Coding for Complex Feature Development


1️⃣ One-Sentence Summary

This paper introduces FeatureBench, a new benchmark for comprehensively evaluating the real-world ability of AI coding agents to develop complete software features. By automatically extracting executable test tasks from open-source projects, it shows that even the most advanced AI models still succeed on only 11% of complex feature-development tasks.

Source: arXiv 2602.10975