arXiv submission date: 2026-01-06
📄 Abstract - One Sample to Rule Them All: Extreme Data Efficiency in RL Scaling

The reasoning ability of large language models (LLMs) can be unleashed with reinforcement learning (RL) (OpenAI, 2024; DeepSeek-AI et al., 2025a; Zeng et al., 2025). The success of existing RL attempts on LLMs usually relies on thousands of high-quality samples or more. In this paper, we challenge fundamental assumptions about data requirements in RL for LLMs by demonstrating the remarkable effectiveness of one-shot learning. Specifically, we introduce polymath learning, a framework for designing one training sample that elicits multidisciplinary impact. We present three key findings: (1) with RL, a single, strategically selected math reasoning sample can produce significant performance improvements across multiple domains, including physics, chemistry, and biology; (2) the math skills salient to reasoning suggest the characteristics of the optimal polymath sample; and (3) an engineered synthetic sample that integrates multidisciplinary elements outperforms training with individual, naturally occurring samples. Our approach achieves performance superior to training with larger datasets across various reasoning benchmarks, demonstrating that sample quality and design, rather than quantity, may be the key to unlocking enhanced reasoning capabilities in language models. Our results suggest a shift, dubbed sample engineering, toward precision engineering of training samples rather than simply increasing data volume.

Top-level tags: llm, reinforcement learning, model training
Detailed tags: one-shot learning, sample efficiency, reasoning, polymath learning, sample engineering

One Sample to Rule Them All: Extreme Data Efficiency in RL Scaling


1️⃣ One-sentence summary

This paper challenges conventional wisdom by showing that reinforcement learning on just a single, carefully designed math reasoning sample can significantly improve a large language model's overall reasoning ability across multiple domains such as physics, chemistry, and biology, demonstrating that sample quality matters more than quantity.

Source: arXiv:2601.03111