arXiv submission date: 2025-12-29
📄 Abstract - Training AI Co-Scientists Using Rubric Rewards

AI co-scientists are emerging as a tool to assist human researchers in achieving their research goals. A crucial feature of these AI co-scientists is the ability to generate a research plan given a set of aims and constraints. The plan may be used by researchers for brainstorming, or may even be implemented after further refinement. However, language models currently struggle to generate research plans that follow all constraints and implicit requirements. In this work, we study how to leverage the vast corpus of existing research papers to train language models that generate better research plans. We build a scalable, diverse training corpus by automatically extracting research goals and goal-specific grading rubrics from papers across several domains. We then train models for research plan generation via reinforcement learning with self-grading. A frozen copy of the initial policy acts as the grader during training, with the rubrics creating a generator-verifier gap that enables improvements without external human supervision. To validate this approach, we conduct a study with human experts for machine learning research goals, spanning 225 hours. The experts prefer plans generated by our finetuned Qwen3-30B-A3B model over the initial model for 70% of research goals, and approve 84% of the automatically extracted goal-specific grading rubrics. To assess generality, we also extend our approach to research goals from medical papers, and new arXiv preprints, evaluating with a jury of frontier models. Our finetuning yields 12-22% relative improvements and significant cross-domain generalization, proving effective even in problem settings like medical research where execution feedback is infeasible. Together, these findings demonstrate the potential of a scalable, automated training recipe as a step towards improving general AI co-scientists.
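The self-grading reward described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a frozen copy of the initial policy scores a generated plan against each goal-specific rubric criterion, and the aggregated score serves as the scalar reinforcement-learning reward. All names (`RubricItem`, `rubric_reward`, `grade_fn`) and the weighting scheme are hypothetical.

```python
# Hypothetical sketch of rubric-based self-grading: a frozen grader scores a
# research plan against each rubric criterion, and the weighted mean score
# is used as the RL reward. Names and scoring details are illustrative.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RubricItem:
    criterion: str   # e.g. "The plan specifies a baseline for comparison."
    weight: float = 1.0


def rubric_reward(
    plan: str,
    rubric: List[RubricItem],
    grade_fn: Callable[[str, str], float],  # frozen grader: (plan, criterion) -> score in [0, 1]
) -> float:
    """Weighted mean of per-criterion grader scores; the scalar RL reward."""
    total_weight = sum(item.weight for item in rubric)
    weighted = sum(item.weight * grade_fn(plan, item.criterion) for item in rubric)
    return weighted / total_weight


# Toy usage with a stub grader (a real grader would prompt the frozen LLM):
rubric = [
    RubricItem("Plan states the research goal explicitly."),
    RubricItem("Plan includes an evaluation protocol.", weight=2.0),
]
stub_grader = lambda plan, criterion: 1.0 if "evaluate" in plan else 0.5
print(rubric_reward("We will evaluate against a baseline.", rubric, stub_grader))  # → 1.0
```

Because the grader is a frozen copy of the initial policy, verifying a plan against concrete rubric criteria is easier than generating a compliant plan, creating the generator-verifier gap the abstract relies on.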

Top-level tags: llm model training agents
Detailed tags: research planning reinforcement learning rubric-based reward self-grading domain generalization

Training AI Co-Scientists Using Rubric Rewards


1️⃣ One-sentence summary

This paper proposes automatically extracting research goals and grading rubrics from existing papers, then training AI models via reinforcement learning with self-grading so that they generate research plans that better satisfy the stated requirements, improving the practical utility of AI co-scientists.

Source: arXiv:2512.23707