arXiv submission date: 2026-01-22
📄 Abstract - Learning to Discover at Test Time

How can we use AI to discover a new state of the art for a scientific problem? Prior work in test-time scaling, such as AlphaEvolve, performs search by prompting a frozen LLM. We perform reinforcement learning at test time, so the LLM can continue to train, but now with experience specific to the test problem. This form of continual learning is quite special, because its goal is to produce one great solution rather than many good ones on average, and to solve this very problem rather than generalize to other problems. Therefore, our learning objective and search subroutine are designed to prioritize the most promising solutions. We call this method Test-Time Training to Discover (TTT-Discover). Following prior work, we focus on problems with continuous rewards. We report results for every problem we attempted, across mathematics, GPU kernel engineering, algorithm design, and biology. TTT-Discover sets the new state of the art in almost all of them: (i) Erdős' minimum overlap problem and an autocorrelation inequality; (ii) a GPUMode kernel competition (up to $2\times$ faster than prior art); (iii) past AtCoder algorithm competitions; and (iv) a denoising problem in single-cell analysis. Our solutions are reviewed by experts or the organizers. All our results are achieved with an open model, OpenAI gpt-oss-120b, and can be reproduced with our publicly available code, in contrast to previous best results that required closed frontier models. Our test-time training runs are performed using Tinker, an API by Thinking Machines, at a cost of only a few hundred dollars per problem.
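The abstract's key design point is that the objective is the single best solution found, not the average quality of samples, with search biased toward the most promising candidates. The sketch below is only a toy illustration of that "one great solution" objective under a continuous reward; it is not the paper's method (TTT-Discover fine-tunes an LLM with reinforcement learning), and the `propose`/`reward` interface and the hill-climbing loop are hypothetical stand-ins.

```python
import random

def discover(propose, reward, rounds=200, seed=0):
    """Toy test-time discovery loop.

    The goal is the single best solution (max reward), not a good
    average over samples. `propose(best, rng)` generates a candidate
    near the current best (or an initial one when best is None);
    `reward` is a continuous score, as in the paper's setting.
    """
    rng = random.Random(seed)
    best = propose(None, rng)
    best_r = reward(best)
    for _ in range(rounds):
        # Bias search toward the most promising solution found so far.
        cand = propose(best, rng)
        r = reward(cand)
        if r > best_r:  # keep only strict improvements: one great solution
            best, best_r = cand, r
    return best, best_r

if __name__ == "__main__":
    # Hypothetical example problem: maximize a continuous reward over x,
    # peaked at x = 3.
    f = lambda x: -(x - 3.0) ** 2
    step = lambda b, rng: rng.uniform(-10, 10) if b is None else b + rng.gauss(0, 0.5)
    x, r = discover(step, f)
    print(x, r)
```

The loop's acceptance rule (keep only improvements) is the simplest way to encode a max-reward rather than mean-reward objective; the actual method replaces the proposal step with an LLM whose weights are updated by RL on problem-specific experience.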

Top-level tags: llm agents model training
Detailed tags: test-time training reinforcement learning scientific discovery continuous optimization open models

Learning to Discover at Test Time


1️⃣ One-sentence summary

This paper proposes TTT-Discover, a method that lets a large language model keep learning and optimizing at test time while solving a specific scientific problem, much as a human would, enabling it to automatically discover solutions that surpass the existing state of the art in mathematics, algorithms, biology, and other domains.

Source: arXiv: 2601.16175