Beyond Token-level Supervision: Unlocking the Potential of Decoding-based Regression via Reinforcement Learning
1️⃣ One-sentence Summary
This paper proposes a new method that uses reinforcement learning to make large language models more accurate at numerical prediction, addressing the limited precision of traditional approaches, which focus only on individual tokens and overlook the overall magnitude of the target value.
Decoding-based regression, which reformulates regression as a sequence generation task, has emerged as a promising paradigm for applying large language models to numerical prediction. However, its progress is hindered by the misalignment between discrete token-level objectives (e.g., cross-entropy) and continuous numerical values. Existing approaches that rely on token-level constraints often fail to capture the global magnitude of the target value, limiting their precision and generalization. In this paper, we propose to unlock the potential of decoding-based regression via Reinforcement Learning (RL). We formulate the generation process as a Markov Decision Process and use sequence-level rewards to enforce global numerical coherence. Extensive experiments on tabular regression and code metric regression demonstrate that our method (specifically with ReMax and GRPO) consistently outperforms both state-of-the-art token-level baselines and traditional regression heads, underscoring the value of sequence-level signals. Our analysis further reveals that RL significantly improves sampling efficiency and predictive precision, establishing decoding-based regression as a robust and accurate paradigm for general-purpose numerical prediction.
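The abstract contrasts token-level cross-entropy with sequence-level rewards under an MDP view. To make that contrast concrete, here is a minimal sketch of what a sequence-level reward function could look like. The paper's actual reward design is not given in this summary, so the function name `sequence_reward`, the clipping, and the scale normalization below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's implementation): a sequence-level
# reward for decoding-based regression. The model emits a number as a
# token sequence (e.g. "3", ".", "1", "4"); instead of scoring each token
# with cross-entropy, RL methods such as ReMax or GRPO score the decoded
# value as a whole against the regression target.

def sequence_reward(generated_tokens: list[str], target: float) -> float:
    """Reward the full decoded sequence by numerical closeness to the target.

    Assumed design: clipped, scale-normalized negative absolute error.
    """
    text = "".join(generated_tokens)
    try:
        prediction = float(text)
    except ValueError:
        return -1.0  # malformed number: assign the worst possible reward
    # Normalize by the target's magnitude so rewards are comparable across
    # scales, and clip so malformed and wildly wrong outputs score alike.
    return max(-1.0, -abs(prediction - target) / (1.0 + abs(target)))


print(sequence_reward(["3", ".", "1", "4"], target=3.0))  # close  -> -0.035
print(sequence_reward(["9", "9"], target=3.0))            # far    -> clipped to -1.0
print(sequence_reward(["o", "o", "p", "s"], target=3.0))  # broken -> -1.0
```

Because the reward depends on the entire decoded value rather than on any individual token, it directly encodes the global magnitude that token-level objectives miss.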
Source: arXiv:2512.06533