arXiv submission date: 2026-04-27
📄 Abstract - Fix Initial Codes and Iteratively Refine Textual Directions Toward Safe Multi-Turn Code Correction

Recent work on large language models (LLMs) has emphasized the importance of scaling inference compute. From this perspective, the state-of-the-art method Scattered Forest Search (SFS) was proposed, employing Monte Carlo Tree Search with carefully crafted initial seeds and textual optimization for multi-turn code correction. However, its complexity makes it unclear which factors contribute to improvements in inference performance. To address this problem, we analyze SFS and propose a simpler method, Iterative Refinement of Textual Directions (IRTD), which fixes the initial code and iteratively refines textual directions. Because of its simplicity, we can theoretically establish the safety of IRTD using Oracle-Guided Inductive Synthesis (OGIS). Experiments on several code generation benchmarks suggest that IRTD achieves inference performance comparable to state-of-the-art methods. These results indicate that, even without complex search structures, refining initial code with high-quality textual directions alone can effectively improve inference performance.

Top-level tags: llm, model evaluation
Detailed tags: code generation, multi-turn correction, iterative refinement, inference scaling, safety analysis

Fix Initial Codes and Iteratively Refine Textual Directions Toward Safe Multi-Turn Code Correction


1️⃣ One-sentence summary

This paper proposes a simple method called IRTD, which fixes the initial code and repeatedly refines textual directions to correct code errors step by step. Without complex search structures, it achieves inference performance comparable to state-of-the-art methods, and the safety of the correction process can be established theoretically.

Source: arXiv 2604.23989