📄 Paper Summary
Revisiting Generalization Across Difficulty Levels: It's Not So Easy
1️⃣ One-Sentence Summary
This study finds that large language models generalize poorly across tasks of different difficulty levels: whether trained on easy or hard data, they fail to achieve consistent improvements across the full range of difficulties, underscoring the importance of difficulty diversity in both training and evaluation data.
We investigate how well large language models (LLMs) generalize across different task difficulties, a key question for effective data curation and evaluation. Existing research is mixed regarding whether training on easier or harder data leads to better results, and whether those gains come on easier or harder test data. We address this question by conducting a systematic evaluation of LLMs' generalization across models, datasets, and fine-grained groups of example difficulty. We rank examples in six datasets using the outputs of thousands of different LLMs and Item Response Theory (IRT), a well-established difficulty metric in educational testing. Unlike prior work, our difficulty ratings are therefore determined solely by the abilities of many different LLMs, excluding human opinions of difficulty. With a more objective, larger-scale, and finer-grained analysis, we show that cross-difficulty generalization is often limited; training on either easy or hard data cannot achieve consistent improvements across the full range of difficulties. These results show the importance of having a range of difficulties in both training and evaluation data for LLMs, and that taking shortcuts with respect to difficulty is risky.
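The abstract's key methodological step is ranking examples by difficulty using Item Response Theory (IRT) fit to the outputs of many LLMs. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: it fits a simple Rasch (1PL) model, P(correct) = sigmoid(θ_i − b_j), where θ_i is the ability of LLM i and b_j is the difficulty of example j. The paper may use a richer IRT variant (e.g., with discrimination parameters), and all function names and the toy data here are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch: estimating IRT (Rasch / 1PL) item difficulties from a
# binary response matrix. Rows are LLMs ("respondents"), columns are benchmark
# examples ("items"); responses[i, j] = 1 if model i answered example j correctly.

def fit_rasch(responses, lr=0.1, n_steps=2000):
    """Fit model abilities (theta) and item difficulties (b) by gradient ascent
    on the Bernoulli log-likelihood of P(correct) = sigmoid(theta_i - b_j)."""
    n_models, n_items = responses.shape
    theta = np.zeros(n_models)   # ability of each LLM
    b = np.zeros(n_items)        # difficulty of each example

    for _ in range(n_steps):
        logits = theta[:, None] - b[None, :]
        p = 1.0 / (1.0 + np.exp(-logits))      # predicted P(correct)
        residual = responses - p               # d log-likelihood / d logit
        theta += lr * residual.mean(axis=1)    # ascend for abilities
        b -= lr * residual.mean(axis=0)        # note the sign: harder items lower the logit
        b -= b.mean()                          # fix the scale: center difficulties at 0

    return theta, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 50 "LLMs" answering 20 "examples" of varying true difficulty.
    true_theta = rng.normal(size=50)
    true_b = np.linspace(-2, 2, 20)
    probs = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
    responses = (rng.random((50, 20)) < probs).astype(float)

    _, difficulty = fit_rasch(responses)
    # Higher b means harder; sorting lets examples be binned into fine-grained
    # difficulty groups for training/evaluation splits, as the abstract describes.
    print(np.argsort(difficulty))  # easiest -> hardest
```

Because the difficulty estimates come only from how many (and which) models answer each example correctly, this kind of ranking reflects model abilities rather than human judgments of difficulty, which is the property the abstract emphasizes.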