arXiv submission date: 2026-01-27
📄 Abstract - Identifying and Transferring Reasoning-Critical Neurons: Improving LLM Inference Reliability via Activation Steering

Despite the strong reasoning capabilities of recent large language models (LLMs), achieving reliable performance on challenging tasks often requires post-training or computationally expensive sampling strategies, limiting their practical efficiency. In this work, we first show that a small subset of neurons in LLMs exhibits strong predictive correlations with reasoning correctness. Based on this observation, we propose AdaRAS (Adaptive Reasoning Activation Steering), a lightweight test-time framework that improves reasoning reliability by selectively intervening on neuron activations. AdaRAS identifies Reasoning-Critical Neurons (RCNs) via a polarity-aware mean-difference criterion and adaptively steers their activations during inference, enhancing incorrect reasoning traces while avoiding degradation on already-correct cases. Experiments on 10 mathematics and coding benchmarks demonstrate consistent improvements, including over 13% gains on AIME-24 and AIME-25. Moreover, AdaRAS exhibits strong transferability across datasets and scalability to stronger models, outperforming post-training methods without additional training or sampling cost.
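The polarity-aware mean-difference criterion and the steering step described in the abstract can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the function names, the top-k selection, and the simple additive steering rule are all assumptions for exposition.

```python
import numpy as np

def find_rcns(acts_correct, acts_incorrect, k=3):
    """Score each neuron by the difference between its mean activation on
    correct vs. incorrect reasoning traces; keep the top-k by magnitude.
    Returns the selected neuron indices and their polarity: +1 if the
    neuron fires higher on correct traces, -1 otherwise."""
    diff = acts_correct.mean(axis=0) - acts_incorrect.mean(axis=0)
    idx = np.argsort(-np.abs(diff))[:k]
    return idx, np.sign(diff[idx])

def steer(activations, idx, polarity, alpha=1.0):
    """Nudge the selected neurons in their 'correct' direction."""
    out = activations.copy()
    out[idx] += alpha * polarity
    return out

# Toy example: an 8-neuron layer where neuron 2 fires higher on correct
# traces and neuron 5 fires higher on incorrect ones.
rng = np.random.default_rng(0)
correct = rng.normal(0.0, 0.1, size=(50, 8))
correct[:, 2] += 2.0
incorrect = rng.normal(0.0, 0.1, size=(50, 8))
incorrect[:, 5] += 2.0

idx, pol = find_rcns(correct, incorrect, k=2)   # selects neurons 2 and 5
steered = steer(np.zeros(8), idx, pol, alpha=0.5)
```

In this sketch, neuron 2 is pushed up and neuron 5 pushed down, i.e. each selected neuron is moved toward the activation pattern seen on correct traces; the paper's adaptive variant would additionally decide at test time when and how strongly to intervene.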

Top-level tags: llm, model evaluation, theory
Detailed tags: activation steering, reasoning reliability, neuron interpretability, test-time intervention, model efficiency

Identifying and Transferring Reasoning-Critical Neurons: Improving LLM Inference Reliability via Activation Steering


1️⃣ One-Sentence Summary

This paper proposes AdaRAS, a lightweight method that identifies the small subset of neurons in a large language model whose activations correlate strongly with reasoning correctness and adjusts those activations at inference time. Without any additional training or complex sampling, this significantly improves reasoning accuracy on challenging tasks such as mathematics and coding.

Source: arXiv 2601.19847