arXiv submission date: 2026-01-12
📄 Abstract - Controlled Self-Evolution for Algorithmic Code Optimization

Self-evolution methods enhance code generation through iterative "generate-verify-refine" cycles, yet existing approaches suffer from low exploration efficiency, failing to discover solutions with superior complexity within limited budgets. This inefficiency stems from initialization bias trapping evolution in poor solution regions, uncontrolled stochastic operations lacking feedback guidance, and insufficient experience utilization across tasks. To address these bottlenecks, we propose Controlled Self-Evolution (CSE), which consists of three key components. Diversified Planning Initialization generates structurally distinct algorithmic strategies for broad solution space coverage. Genetic Evolution replaces stochastic operations with feedback-guided mechanisms, enabling targeted mutation and compositional crossover. Hierarchical Evolution Memory captures both successful and failed experiences at inter-task and intra-task levels. Experiments on EffiBench-X demonstrate that CSE consistently outperforms all baselines across various LLM backbones. Furthermore, CSE achieves higher efficiency from early generations and maintains continuous improvement throughout evolution. Our code is publicly available at this https URL.
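To make the "generate-verify-refine" loop and the three components concrete, here is a minimal, hypothetical Python sketch. It is not the paper's implementation: the function names (`propose_plans`, `verify`, `mutate`, `crossover`), the `Candidate` structure, and the flat `memory` list are all illustrative assumptions standing in for LLM calls, the efficiency benchmark, and the hierarchical memory described in the abstract.

```python
# Hypothetical sketch of a controlled self-evolution loop (not the paper's code).
# LLM calls and the benchmark are replaced by simple stubs so the script runs.
from dataclasses import dataclass
import random

@dataclass
class Candidate:
    code: str
    plan: str
    score: float = 0.0   # placeholder for a measured efficiency metric
    feedback: str = ""    # verifier output used to guide later edits

def propose_plans(task: str, k: int) -> list[str]:
    """Diversified Planning Initialization (stub): ask an LLM for k structurally
    distinct algorithmic strategies (e.g. two pointers, segment tree, DP)."""
    return [f"strategy-{i} for {task}" for i in range(k)]

def synthesize(plan: str) -> Candidate:
    """Stub for LLM code generation conditioned on one plan."""
    return Candidate(code=f"# code following {plan}", plan=plan)

def verify(c: Candidate) -> Candidate:
    """Stub verifier: run tests / an efficiency benchmark and attach feedback."""
    c.score = random.random()  # stand-in for runtime/memory measurement
    c.feedback = "bottleneck: inner loop" if c.score < 0.5 else "ok"
    return c

def mutate(c: Candidate, memory: list[str]) -> Candidate:
    """Feedback-guided mutation (stub): edit the code where the verifier
    reported a bottleneck, optionally reusing hints stored in memory."""
    return Candidate(code=c.code + f"\n# patched: {c.feedback}", plan=c.plan)

def crossover(a: Candidate, b: Candidate) -> Candidate:
    """Compositional crossover (stub): combine complementary parts of two plans."""
    return Candidate(code=a.code + "\n" + b.code, plan=f"{a.plan} + {b.plan}")

def evolve(task: str, generations: int = 5, pop_size: int = 4) -> Candidate:
    memory: list[str] = []  # hierarchical evolution memory, flattened here
    population = [verify(synthesize(p)) for p in propose_plans(task, pop_size)]
    for _ in range(generations):
        population.sort(key=lambda c: c.score, reverse=True)
        parents = population[: pop_size // 2]
        memory.extend(c.feedback for c in population)  # keep failures as well
        children = [mutate(p, memory) for p in parents]
        children.append(crossover(parents[0], parents[1]))
        population = parents + [verify(c) for c in children]
    return max(population, key=lambda c: c.score)

if __name__ == "__main__":
    best = evolve("minimize runtime of subarray-sum query")
    print(best.plan, best.score)
```

The key structural difference from a plain "generate-verify-refine" loop is that mutation and crossover are conditioned on verifier feedback and accumulated experience rather than being purely stochastic, which is the mechanism the abstract credits for improved exploration efficiency.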

Top-level tags: llm agents systems
Detailed tags: code optimization, self-evolution, genetic algorithm, algorithmic reasoning, program synthesis

Controlled Self-Evolution for Algorithmic Code Optimization


1️⃣ One-Sentence Summary

This paper proposes a method called Controlled Self-Evolution, which combines diversified initial strategies, feedback-guided evolutionary operations, and a hierarchical memory mechanism to overcome the low exploration efficiency and tendency toward local optima of existing self-evolution methods for code generation, allowing it to find better-performing algorithmic code faster within a limited budget.

Source: arXiv:2601.07348