arXiv submission date: 2026-01-11
📄 Abstract - X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests

Competitive programming poses great challenges for Code LLMs due to its intensive reasoning demands and high logical complexity. However, current Code LLMs still rely heavily on real-world data, which limits their scalability. In this paper, we explore a fully synthetic approach: training Code LLMs with entirely generated tasks, solutions, and test cases, empowering code reasoning models without relying on real-world data. To support this, we propose SynthSmith, a novel feature-based data synthesis pipeline. SynthSmith shows strong potential in producing diverse and challenging tasks, along with verified solutions and tests, supporting both supervised fine-tuning (SFT) and reinforcement learning (RL). Based on the proposed synthetic SFT and RL datasets, we introduce the X-Coder model series, which achieves a notable pass rate of 62.9 avg@8 on LiveCodeBench v5 and 55.8 on v6, outperforming DeepCoder-14B-Preview and AReal-boba2-14B despite having only 7B parameters. In-depth analysis reveals that scaling laws hold on our synthetic dataset, and we explore which dimensions are most effective to scale. We further provide insights into code-centric reinforcement learning and highlight the key factors that shape performance through detailed ablations and analysis. Our findings demonstrate that scaling high-quality synthetic data and adopting staged training can greatly advance code reasoning while mitigating reliance on real-world coding data.
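The abstract mentions that SynthSmith produces "verified solutions and tests." A natural way to read this is execution-based filtering: run each generated solution against its generated test cases and keep only the triples where every case passes. The sketch below illustrates that idea under stated assumptions; the data format, helper name `verify_solution`, and the stdin/stdout test convention are illustrative guesses, not the paper's actual pipeline or API.

```python
# Illustrative sketch of execution-based verification for synthetic
# (task, solution, tests) triples. All names and the data format are
# assumptions for illustration, not SynthSmith's real interface.
import subprocess
import sys

def verify_solution(solution_code: str, tests: list[tuple[str, str]],
                    timeout: float = 2.0) -> bool:
    """Run a candidate solution on each (stdin, expected stdout) test case;
    accept it only if every case passes within the time limit."""
    for stdin_text, expected in tests:
        try:
            result = subprocess.run(
                [sys.executable, "-c", solution_code],
                input=stdin_text, capture_output=True,
                text=True, timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return False  # reject solutions that hang
        if result.returncode != 0 or result.stdout.strip() != expected.strip():
            return False  # crash or wrong answer
    return True

# Usage: filter a batch of synthetic samples down to the verified ones.
samples = [
    {"task": "Print the sum of two integers.",
     "solution": "a, b = map(int, input().split())\nprint(a + b)",
     "tests": [("1 2", "3"), ("10 -4", "6")]},
    {"task": "Same task, buggy solution.",
     "solution": "a, b = map(int, input().split())\nprint(a - b)",
     "tests": [("1 2", "3")]},
]
verified = [s for s in samples if verify_solution(s["solution"], s["tests"])]
print(len(verified))  # → 1 (the buggy sample is filtered out)
```

This kind of filter is what makes fully synthetic data usable for RL: the surviving test suites double as reward signals, since a policy's rollout can be scored by the same pass/fail execution check.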

Top-level tags: llm, model training, systems
Detailed tags: code generation, synthetic data, competitive programming, reinforcement learning, scaling laws

X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests


1️⃣ One-sentence summary

This paper proposes a new approach that trains code LLMs entirely on AI-generated tasks, code, and test data, enabling strong performance on complex competitive programming challenges without relying on real-world programming data.

Source: arXiv:2601.06953