
🤖 System
📄 Abstract - SkillFactory: Self-Distillation For Learning Cognitive Behaviors

Reasoning models leveraging long chains of thought employ various cognitive skills, such as verification of their answers, backtracking, retrying by an alternate method, and more. Previous work has shown that when a base language model exhibits these skills, further training with reinforcement learning (RL) enables the model to learn to leverage them. How can we get models to leverage skills that aren't exhibited by base models? Our work, SkillFactory, is a method for fine-tuning models to roughly learn these skills during a supervised fine-tuning (SFT) stage prior to RL. Our approach does not rely on distillation from a stronger model, but instead uses samples from the model itself, rearranged to provide training data in the format of those skills. These "silver" SFT traces may be imperfect, but are nevertheless effective for priming a model to acquire skills during RL. Our evaluation shows that (1) starting from SkillFactory SFT initialization helps a model to generalize to harder variants of a task post-RL, despite lower performance pre-RL; (2) cognitive skills are indeed used by the model; (3) RLed SkillFactory models are more robust to regression on out-of-domain tasks than RLed base models. Our work suggests that inductive biases learned prior to RL help models learn robust cognitive skill use.
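
As a rough illustration of the "rearranged self-samples" idea in the abstract, the sketch below shows one plausible way to stitch a model's own incorrect and correct attempts into a "silver" SFT trace that exhibits verification and backtracking. All names, the trace template, and the pairing logic are assumptions for illustration, not the paper's actual procedure or API.

```python
# Hypothetical sketch (not the paper's code): assemble a skill-formatted SFT
# trace from two self-generated samples for the same problem, so the trace
# shows attempt -> self-verification -> backtrack -> retry -> final answer.

from dataclasses import dataclass


@dataclass
class Sample:
    """One self-generated solution attempt for a problem."""
    problem: str
    reasoning: str
    answer: str
    is_correct: bool  # judged against a known gold answer


def build_backtracking_trace(wrong: Sample, right: Sample) -> str:
    """Stitch an incorrect and a correct attempt into one 'silver' trace."""
    assert wrong.problem == right.problem
    return (
        f"Problem: {wrong.problem}\n"
        f"Attempt: {wrong.reasoning}\n"
        f"Proposed answer: {wrong.answer}\n"
        "Let me verify this answer... it does not check out, "
        "so I should backtrack and try a different approach.\n"
        f"Retry: {right.reasoning}\n"
        f"Final answer: {right.answer}"
    )


# Usage: pair one incorrect and one correct self-sample per problem, then run
# SFT on the stitched traces before the RL stage.
wrong = Sample("2+2*3=?", "Add first: (2+2)*3 = 12.", "12", is_correct=False)
right = Sample("2+2*3=?", "Multiply first: 2*3 = 6, then 2+6 = 8.", "8", is_correct=True)
print(build_backtracking_trace(wrong, right))
```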

Top-level tags: llm model training agents
Detailed tags: self-distillation cognitive skills reinforcement learning supervised fine-tuning reasoning

SkillFactory: Self-Distillation For Learning Cognitive Behaviors


1️⃣ One-Sentence Summary

This paper proposes a self-distillation method called SkillFactory, which performs supervised fine-tuning on reorganized samples generated by the model itself so that, before reinforcement learning, the model roughly acquires cognitive skills such as verification and backtracking; this lets it use these skills more robustly during subsequent reinforcement learning and generalize better to harder tasks.


📄 Open Original PDF