arXiv submission date: 2026-04-23
📄 Abstract - DryRUN: On the Role of Public Tests in LLM-Driven Code Generation

Multi-agent frameworks are widely used in autonomous code generation and have applications in complex algorithmic problem-solving. Recent work addresses the challenge of generating functionally correct code by incorporating simulation-driven planning and debugging, in which language models trace execution steps to verify logic. However, these approaches depend on human-provided public test cases to ground the debugging and simulation loop. Manually authoring comprehensive input-output examples is a labor-intensive bottleneck in the software development lifecycle, and because ground-truth input-output examples are rarely available before implementation in real-world software engineering, this dependency restricts such methods to curated competitive programming benchmarks. Furthermore, we identify that reliance on these public tests induces an "overconfidence gap," causing frameworks to overfit to simplistic examples and fail on hidden evaluations. In contrast, we observe that external sample inputs are not strictly necessary for code generation: large language models can autonomously generate valid inputs and simulate execution traces to self-correct. We therefore develop DryRUN, a framework that eliminates the need for ground-truth samples by letting the LLM iteratively plan, generate its own inputs, and simulate execution, mitigating algorithmic overconfidence. Evaluations on the LiveCodeBench v6 dataset (post-March 2025) demonstrate that DryRUN matches CodeSIM, a state-of-the-art public-test-dependent framework, while operating entirely without public test cases or external execution feedback and while reducing output token consumption.
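The plan → generate-inputs → simulate → self-correct loop described above can be sketched as follows. This is a minimal illustrative sketch only, assuming a generic LLM interface: all `llm_*` functions are hypothetical stand-ins for model calls, not part of the paper's released code.

```python
# Hypothetical sketch of a DryRUN-style loop: the model plans, invents its
# own inputs (no public tests), simulates execution, and revises the code
# until the simulated traces agree with the plan. All llm_* functions are
# placeholder stubs standing in for real LLM calls.

def llm_plan(problem: str) -> str:
    # Stub: a real system would prompt the model for an algorithmic plan.
    return f"plan for: {problem}"

def llm_generate_code(problem: str, plan: str) -> str:
    # Stub: the model drafts (or revises) a candidate solution.
    return "def solve(x):\n    return x * 2\n"

def llm_generate_inputs(problem: str, n: int = 3) -> list:
    # The model autonomously generates valid inputs for the problem.
    return [1, 2, 3][:n]

def llm_simulate(code: str, inp):
    # The model traces execution step by step ("dry run") and reports
    # whether the trace is consistent with the plan; stubbed here.
    return (inp * 2, True)

def dryrun(problem: str, max_rounds: int = 3) -> str:
    plan = llm_plan(problem)
    code = llm_generate_code(problem, plan)
    for _ in range(max_rounds):
        traces = [llm_simulate(code, x) for x in llm_generate_inputs(problem)]
        if all(ok for _, ok in traces):
            return code  # simulated traces agree with the plan: accept
        code = llm_generate_code(problem, plan)  # self-correct and retry
    return code  # budget exhausted: return best effort

print(dryrun("double the input"))
```

Note that nothing in the loop executes code or consults ground-truth examples; the only feedback signal is the model's own simulated traces, which is the property that lets this style of framework run before any tests exist.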

Top-level tags: llm agents
Detailed tags: code generation, multi-agent, public tests, overconfidence gap, self-correction

DryRUN: On the Role of Public Tests in LLM-Driven Code Generation


1️⃣ One-sentence summary

This paper argues that current multi-agent code generation frameworks rely excessively on human-provided test cases, which leads to poor performance on hidden tests. It proposes the DryRUN framework, which lets a large language model autonomously generate inputs and simulate execution to self-correct, matching or even surpassing the performance of existing methods without any ground-truth test cases.

From arXiv: 2604.21598