Guided Self-Evolving LLMs with Minimal Human Supervision
1️⃣ One-Sentence Summary
This paper introduces R-Few, a guided self-play framework in which a small set of human-labeled examples and difficulty-based curriculum training enable large language models to self-evolve stably and controllably. R-Few delivers substantial gains on math and general reasoning tasks while avoiding the performance plateaus and degradation that commonly afflict unguided self-evolution.
AI self-evolution has long been envisioned as a path toward superintelligence, where models autonomously acquire, refine, and internalize knowledge from their own learning experiences. Yet in practice, unguided self-evolving systems often plateau quickly or even degrade as training progresses. These failures arise from issues such as concept drift, diversity collapse, and mis-evolution, as models reinforce their own biases and converge toward low-entropy behaviors. To enable models to self-evolve in a stable and controllable manner while minimizing reliance on human supervision, we introduce R-Few, a guided Self-Play Challenger-Solver framework that incorporates lightweight human oversight through in-context grounding and mixed training. At each iteration, the Challenger samples a small set of human-labeled examples to guide synthetic question generation, while the Solver jointly trains on human and synthetic examples under an online, difficulty-based curriculum. Across math and general reasoning benchmarks, R-Few achieves consistent and iterative improvements. For example, Qwen3-8B-Base improves by +3.0 points over R-Zero on math tasks and achieves performance on par with General-Reasoner, despite the latter being trained on 20 times more human data. Ablation studies confirm the complementary contributions of grounded challenger training and curriculum-based solver training, and further analysis shows that R-Few mitigates drift, yielding more stable and controllable co-evolutionary dynamics.
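To make the Challenger-Solver loop described above concrete, below is a minimal Python sketch of one guided self-play iteration. The `challenger` and `solver` objects and their `generate_questions`, `answer`, and `train` methods are hypothetical placeholders, and the mid-difficulty filter based on Solver self-consistency is an assumption about how the "online, difficulty-based curriculum" might be approximated; the paper's actual reward design and training objectives are not reproduced here.

```python
import random

def challenger_generate(challenger, human_pool, k_shot=4, n_questions=64):
    """Grounded generation: sample a few human-labeled examples as in-context
    demonstrations before the Challenger drafts new synthetic questions."""
    grounding = random.sample(human_pool, k_shot)
    demo_block = "\n\n".join(ex["question"] for ex in grounding)
    # `generate_questions` is a hypothetical interface standing in for the
    # Challenger's actual question-generation policy.
    return challenger.generate_questions(context=demo_block, n=n_questions)

def difficulty_band(solver, questions, n_rollouts=8, lo=0.25, hi=0.75):
    """Online difficulty estimate (assumed): keep questions whose empirical
    solve rate, measured via Solver self-consistency, falls in a mid band."""
    kept = []
    for q in questions:
        answers = [solver.answer(q) for _ in range(n_rollouts)]
        majority = max(set(answers), key=answers.count)
        rate = answers.count(majority) / n_rollouts
        if lo <= rate <= hi:
            kept.append({"question": q, "answer": majority})
    return kept

def co_evolve(challenger, solver, human_pool, iterations=5, mix_ratio=0.5):
    """One guided self-play loop: grounded Challenger generation followed by
    Solver training on a mix of synthetic and human-labeled examples."""
    for _ in range(iterations):
        candidates = challenger_generate(challenger, human_pool)
        synthetic = difficulty_band(solver, candidates)
        n_human = min(int(len(synthetic) * mix_ratio), len(human_pool))
        mixed_batch = synthetic + random.sample(human_pool, n_human)
        solver.train(mixed_batch)     # mixed training step (e.g., RL or SFT)
        challenger.train(human_pool)  # keep the Challenger grounded as well
    return solver
```

Re-grounding the Challenger on the same small human-labeled pool at every iteration, and mixing that pool into each Solver update, is what the abstract credits with mitigating concept drift and diversity collapse in the co-evolutionary dynamics.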