Improving Interactive In-Context Learning from Natural Language Feedback
1️⃣ One-Sentence Summary
This paper proposes a new method that converts single-turn tasks into multi-turn interactive training, teaching large language models to learn from corrective feedback the way humans do. This significantly improves performance on complex tasks such as math and coding, and even lets a smaller model approach the performance of a much larger one.
Adapting one's thought process based on corrective feedback is an essential ability in human learning, particularly in collaborative settings. In contrast, the current large language model training paradigm relies heavily on modeling vast, static corpora. While effective for knowledge acquisition, it overlooks the interactive feedback loops essential for models to adapt dynamically to their context. In this work, we propose a framework that treats this interactive in-context learning ability not as an emergent property, but as a distinct, trainable skill. We introduce a scalable method that transforms single-turn verifiable tasks into multi-turn didactic interactions driven by information asymmetry. We first show that current flagship models struggle to integrate corrective feedback on hard reasoning tasks. We then demonstrate that models trained with our approach dramatically improve the ability to interactively learn from language feedback. More specifically, the multi-turn performance of a smaller model nearly reaches that of a model an order of magnitude larger. We also observe robust out-of-distribution generalization: interactive training on math problems transfers to diverse domains like coding, puzzles and maze navigation. Our qualitative analysis suggests that this improvement is due to an enhanced in-context plasticity. Finally, we show that this paradigm offers a unified path to self-improvement. By training the model to predict the teacher's critiques, effectively modeling the feedback environment, we convert this external signal into an internal capability, allowing the model to self-correct even without a teacher.
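The abstract describes converting a single-turn verifiable task into a multi-turn didactic interaction: a teacher with access to the answer (the information asymmetry) critiques the student's attempt in natural language, and the student revises in context. A minimal sketch of that loop, with purely illustrative stand-ins for the student, teacher, and verifier (none of these names or behaviors come from the paper):

```python
# Hypothetical sketch of the multi-turn didactic loop from the abstract:
# a single-turn verifiable task becomes a multi-turn interaction in which
# a "teacher" (who can check the answer) gives language feedback and the
# "student" revises. All functions are toy stand-ins, not the paper's code.

def verify(answer: str, reference: str) -> bool:
    """Checker for a verifiable task (here: simple exact match)."""
    return answer.strip() == reference.strip()

def teacher_feedback(answer: str) -> str:
    """Stand-in critique; a real teacher model would explain the error."""
    return f"Incorrect: '{answer}' is not right. Re-check your steps."

def student(prompt: str, history: list[str]) -> str:
    """Stand-in for the student model; a real LLM would condition on
    the prompt plus the accumulated feedback history."""
    # Toy behavior: first attempt is wrong, then it "learns" from feedback.
    return "4" if history else "5"

def interactive_episode(prompt: str, reference: str, max_turns: int = 3):
    """Run one single-turn task as a multi-turn feedback interaction."""
    history: list[str] = []
    answer = ""
    for turn in range(max_turns):
        answer = student(prompt, history)
        if verify(answer, reference):
            return turn + 1, answer  # solved after this many turns
        history.append(teacher_feedback(answer))
    return max_turns, answer

turns, final = interactive_episode("What is 2 + 2?", "4")
```

The episode transcript (attempts plus teacher critiques) is what would then serve as multi-turn training data; the abstract's self-improvement variant additionally trains the student to predict the teacher's critiques, so the external feedback signal becomes an internal one.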
Source: arXiv:2602.16066