Reliable Control-Point Selection for Steering Reasoning in Large Language Models
1️⃣ One-Sentence Summary
This paper finds that current methods for steering large language model reasoning via keyword matching are largely unreliable. It proposes a new stability-filtering approach that identifies and exploits genuinely stable behavioral signals inside the model to improve performance on tasks such as mathematical reasoning, and the resulting steering vectors transfer across models within the same architecture family.
Steering vectors offer a training-free mechanism for controlling reasoning behaviors in large language models, but constructing effective vectors requires identifying genuine behavioral signals in the model's hidden states. For behaviors that can be toggled via prompts, this is straightforward. However, many reasoning behaviors -- such as self-reflection -- emerge spontaneously and resist prompt-level control. Current methods detect these behaviors through keyword matching in chain-of-thought traces, implicitly assuming that every detected boundary encodes a genuine behavioral signal. We show that this assumption is overwhelmingly wrong: across 541 keyword-detected boundaries, 93.3% are behaviorally unstable, failing to reproduce the detected behavior under re-generation from the same prefix. We develop a probabilistic model that formalizes intrinsic reasoning behaviors as stochastic events with context-dependent trigger probabilities, and show that unstable boundaries dilute the steering signal. Guided by this analysis, we propose stability filtering, which retains only boundaries where the model consistently reproduces the target behavior. Combined with a content-subspace projection that removes residual question-specific noise, our method achieves 0.784 accuracy on MATH-500 (+5.0 over the strongest baseline). The resulting steering vectors transfer across models in the same architecture family without re-extraction, improving Nemotron-Research-Reasoning-1.5B (+5.0) and DeepScaleR-1.5B-Preview (+6.0). Code is available at this https URL.
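The two steps the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`stability_filter`, `project_out_content`), the sampling count, the acceptance threshold, and the callable `regenerate` (which stands in for re-generating from a boundary's prefix and checking whether the target behavior reappears) are all assumptions made for the sketch.

```python
import numpy as np

def stability_filter(boundaries, regenerate, n_samples=8, min_rate=0.8):
    """Keep only keyword-detected boundaries whose target behavior is
    reliably reproduced when re-generating from the same prefix.
    `regenerate(b)` is a hypothetical callable: True iff one fresh
    sample from b's prefix exhibits the behavior."""
    stable = []
    for b in boundaries:
        hits = sum(regenerate(b) for _ in range(n_samples))
        if hits / n_samples >= min_rate:
            stable.append(b)
    return stable

def project_out_content(v, content_dirs, k=2):
    """Remove the component of steering vector `v` lying in the top-k
    subspace spanned by question-content directions (rows of
    `content_dirs`), leaving the behavior-specific part (a sketch)."""
    U, _, _ = np.linalg.svd(content_dirs.T, full_matrices=False)
    B = U[:, :k]                    # orthonormal basis of content subspace
    return v - B @ (B.T @ v)        # orthogonal complement projection

# Toy demo with deterministic "re-generation" flags.
boundaries = [{"id": i, "stable": s}
              for i, s in enumerate([True, False, True, False])]
kept = stability_filter(boundaries, lambda b: b["stable"])
# kept boundary ids: [0, 2]

content = np.array([[1., 0., 0., 0.],
                    [0., 1., 0., 0.]])   # content spans e1, e2
v = np.array([1., 0., 1., 0.])           # steering vector with content leakage
clean = project_out_content(v, content)  # -> [0, 0, 1, 0]
```

The filter discards the ~93% of unstable boundaries the paper reports, so the mean-difference steering vector is averaged only over boundaries that actually carry the behavioral signal.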
Source: arXiv:2604.02113