arXiv submission date: 2026-01-28
📄 Abstract - Positive-Unlabeled Reinforcement Learning Distillation for On-Premise Small Models

Due to constraints on privacy, cost, and latency, on-premise deployment of small models is increasingly common. However, most practical pipelines stop at supervised fine-tuning (SFT) and fail to reach the reinforcement learning (RL) alignment stage. The main reason is that RL alignment typically requires either expensive human preference annotation or heavy reliance on high-quality reward models with large-scale API usage and ongoing engineering maintenance, both of which are ill-suited to on-premise settings. To bridge this gap, we propose a positive-unlabeled (PU) RL distillation method for on-premise small-model deployment. Without human-labeled preferences or a reward model, our method distills the teacher's preference-optimization capability from black-box generations into a locally trainable student. For each prompt, we query the teacher once to obtain an anchor response, locally sample multiple student candidates, and perform anchor-conditioned self-ranking to induce pairwise or listwise preferences, enabling a fully local training loop via direct preference optimization or group relative policy optimization. Theoretical analysis shows that the preference signal induced by our method is order-consistent and concentrates on near-optimal candidates, supporting its stability for preference optimization. Experiments demonstrate that our method achieves consistently strong performance under a low-cost setting.
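As a reading aid, here is a minimal Python sketch of the training loop the abstract describes: one teacher query per prompt, local student sampling, anchor-conditioned self-ranking, and a DPO-style pairwise update. All helper names (`query_teacher`, `sample_student`, `anchor_score`, `local_training_step`) and the token-overlap scoring are hypothetical stand-ins for illustration only; the paper's actual ranking rule and optimization details may differ.

```python
# Toy sketch of the pipeline in the abstract; all helpers are hypothetical stubs.
import math
import random

def query_teacher(prompt: str) -> str:
    """One black-box teacher call per prompt -> anchor response (stubbed)."""
    return "anchor response for: " + prompt

def sample_student(prompt: str, n: int = 4) -> list[str]:
    """Sample n candidate responses from the local student model (stubbed)."""
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def anchor_score(candidate: str, anchor: str) -> float:
    """Toy anchor-conditioned score: token overlap with the teacher anchor."""
    c, a = set(candidate.split()), set(anchor.split())
    return len(c & a) / max(len(a), 1)

def dpo_pair_loss(logp_chosen: float, logp_rejected: float,
                  ref_chosen: float, ref_rejected: float,
                  beta: float = 0.1) -> float:
    """Standard DPO pairwise loss, -log sigmoid(beta * margin), on log-probs."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def local_training_step(prompt: str) -> float:
    anchor = query_teacher(prompt)              # single teacher query
    candidates = sample_student(prompt)         # locally sampled candidates
    ranked = sorted(candidates,                 # anchor-conditioned self-ranking
                    key=lambda c: anchor_score(c, anchor), reverse=True)
    chosen, rejected = ranked[0], ranked[-1]    # induced pairwise preference
    # Log-probs would come from the student and a frozen reference model;
    # random stubs here keep the sketch self-contained.
    lp_c, lp_r = random.uniform(-5, 0), random.uniform(-10, -5)
    return dpo_pair_loss(lp_c, lp_r, ref_chosen=lp_c - 0.1, ref_rejected=lp_r - 0.1)

print(local_training_step("Explain positive-unlabeled learning in one sentence."))
```

Under this sketch's assumptions, the teacher is touched exactly once per prompt and everything else (sampling, ranking, loss computation) stays on-premise, which is the cost/privacy point the abstract emphasizes; a listwise variant would rank all candidates and feed the whole group into a GRPO-style update instead of a single chosen/rejected pair.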

Top-level tags: model training, reinforcement learning, llm
Detailed tags: preference optimization, knowledge distillation, on-premise deployment, positive-unlabeled learning, direct preference optimization

Positive-Unlabeled Reinforcement Learning Distillation for On-Premise Small Models


1️⃣ One-Sentence Summary

This paper proposes a new method that requires neither human-annotated preferences nor a reward model: by distilling preference-optimization capability from the black-box generations of a large (teacher) model, it enables on-premise small models to achieve reinforcement learning alignment at low cost, thereby improving performance.

Source: arXiv:2601.20687