SR-GRPO: Stable Rank as an Intrinsic Geometric Reward for Large Language Model Alignment
1️⃣ One-Sentence Summary
This paper proposes a new method called "stable rank," which automatically assesses output quality by analyzing the geometric structure of a model's internal representations and uses that signal as a reward to optimize large language models. Without relying on human annotations or an external reward model, it effectively improves performance on tasks such as mathematical reasoning.
Aligning Large Language Models (LLMs) with human preferences typically relies on external supervision, which faces critical limitations: human annotations are scarce and subjective, reward models are vulnerable to reward hacking, and self-evaluation methods suffer from prompt sensitivity and biases. In this work, we propose stable rank, an intrinsic, annotation-free quality signal derived from model representations. Stable rank measures the effective dimensionality of hidden states by computing the ratio of total variance to dominant-direction variance, capturing quality through how information distributes across representation dimensions. Empirically, stable rank achieves 84.04% accuracy on RewardBench and improves task accuracy by an average of 11.3 percentage points over greedy decoding via Best-of-N sampling. Leveraging this insight, we introduce Stable Rank Group Relative Policy Optimization (SR-GRPO), which uses stable rank as a reward signal for reinforcement learning. Without external supervision, SR-GRPO improves Qwen2.5-1.5B-Instruct by 10% on STEM and 19% on mathematical reasoning, outperforming both learned reward models and self-evaluation baselines. Our findings demonstrate that quality signals can be extracted from internal model geometry, offering a path toward scalable alignment without external supervision.
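As a concrete illustration of the definition above (the ratio of total variance to dominant-direction variance of the hidden states), here is a minimal PyTorch sketch of a stable-rank computation. The function name, the mean-centering step, and the choice of which layer's hidden states to score are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def stable_rank(hidden_states: torch.Tensor, center: bool = True) -> torch.Tensor:
    """Stable rank of a (num_tokens, hidden_dim) matrix of hidden states.

    Computed as the ratio of total variance (squared Frobenius norm) to
    dominant-direction variance (squared top singular value). Values range
    from 1 (all variance in one direction) up to min(num_tokens, hidden_dim).
    """
    H = hidden_states.float()
    if center:
        # Centering is an assumption here: it makes the squared singular values
        # correspond to per-direction variances of the token representations.
        H = H - H.mean(dim=0, keepdim=True)
    singular_values = torch.linalg.svdvals(H)        # returned in descending order
    total_variance = singular_values.pow(2).sum()    # == ||H||_F^2
    dominant_variance = singular_values[0].pow(2)    # == ||H||_2^2
    return total_variance / dominant_variance


# Hypothetical usage: score a sampled response by the stable rank of its
# final-layer hidden states, e.g. as a Best-of-N or GRPO-style reward signal.
# hidden = model(input_ids, output_hidden_states=True).hidden_states[-1][0]
# reward = stable_rank(hidden)
```

A higher stable rank indicates that the representation's variance is spread across many directions rather than collapsed onto one dominant axis, which is the geometric property the paper links to output quality.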