Fast and Accurate Probing of In-Training LLMs' Downstream Performances
1️⃣ One-Sentence Summary
This paper proposes a new method that uses lightweight probes to quickly predict a model's downstream task performance during training. Compared with the conventional evaluation approach, it cuts the time cost from about 1 hour to about 3 minutes while remaining accurate and reliable.
The paradigm of scaling Large Language Models (LLMs) in both parameter size and test-time compute has pushed the boundaries of AI capabilities, but at the cost of making the traditional generative evaluation paradigm prohibitively expensive, rendering the latency of in-training downstream performance evaluation unbearable. However, simple metrics such as training loss (perplexity) do not always correlate with downstream performance, as their trends sometimes diverge from actual task outcomes. This dilemma calls for a method that is both computationally efficient and sufficiently accurate in measuring model capabilities. To address this challenge, we introduce a new in-training evaluation paradigm that uses lightweight probes to monitor downstream performance. The probes take the internal representations of LLM checkpoints (during training) as input and directly predict each checkpoint's performance on downstream tasks, measured by success probability (i.e., pass@1). We design several probe architectures and validate their effectiveness on OLMo3-7B checkpoints across a diverse set of downstream tasks. The probes accurately predict a checkpoint's performance (avg. AUROC$>$0.75), generalize decently across checkpoints (earlier checkpoints predict later ones), and reduce evaluation latency from $\sim$1 hr (with the conventional generative evaluation method) to $\sim$3 min. In sum, this work presents a practical and scalable in-training downstream evaluation paradigm, enabling a more agile, informed, and efficient LLM development process.
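To make the probing idea concrete, here is a minimal sketch of the general recipe the abstract describes: train a lightweight linear probe that maps a checkpoint's internal representations to a binary success label (pass@1), then score it with AUROC. The data here is synthetic stand-in features, not OLMo3-7B activations, and the plain logistic-regression probe and training loop are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each row of X stands in for an LLM checkpoint's internal
# representation of one task prompt; y marks whether the model solved it (pass@1).
d = 64    # assumed hidden dimension
n = 2000  # number of (prompt, outcome) examples
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true + rng.normal(scale=2.0, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Lightweight linear probe: logistic regression fit by plain gradient descent.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)           # predicted success probability per prompt
    w -= lr * X.T @ (p - y) / n  # gradient of the mean logistic loss

# AUROC via the rank (Mann-Whitney U) formulation: the fraction of
# (positive, negative) pairs where the positive example scores higher.
scores = X @ w
pos, neg = scores[y == 1], scores[y == 0]
auroc = (pos[:, None] > neg[None, :]).mean()
print(f"probe AUROC: {auroc:.3f}")
```

In practice the probe would be trained on representations and pass@1 labels from earlier checkpoints and applied to later ones; because the probe is tiny relative to a full generative evaluation, scoring a new checkpoint costs only a forward pass for features plus a linear map.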
Source: arXiv: 2604.01025