arXiv submission date: 2026-04-08
📄 Abstract - Selective Neuron Amplification for Training-Free Task Enhancement

Large language models often fail on tasks they seem to already understand. In our experiments, this appears to be less about missing knowledge and more about certain internal circuits not being strongly activated during inference. We explore Selective Neuron Amplification (SNA), which increases the influence of task-relevant neurons at inference time without changing the model's parameters or permanently altering the model. SNA helps mainly when the model is uncertain and has little effect when the model is already confident, suggesting that some model failures are due to weak activation rather than a lack of capability.
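The core idea of the abstract, boosting the activations of selected neurons during a forward pass while leaving all weights untouched, can be sketched as a minimal activation-editing function. The neuron indices and gain factor below are illustrative assumptions for the sketch, not values or APIs from the paper.

```python
import numpy as np

def amplify_neurons(activations, neuron_idx, gain=2.0):
    """Return a copy of `activations` with selected neurons scaled by `gain`.

    Only this intermediate activation vector is changed; no weight is
    modified, so the intervention is training-free and lasts only for
    the current forward pass.
    """
    out = activations.astype(float).copy()
    out[neuron_idx] *= gain  # boost (hypothetically) task-relevant neurons only
    return out

# Illustrative example: a 6-neuron hidden state where neurons 1 and 4
# are assumed (hypothetically) to be task-relevant.
h = np.array([0.5, 0.2, 0.1, 0.4, 0.3, 0.6])
h_amp = amplify_neurons(h, [1, 4], gain=2.0)
```

In a real transformer this scaling would typically be applied inside a forward hook on a chosen layer, so the amplification takes effect transparently during generation and disappears when the hook is removed.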

Top tags: llm model evaluation theory
Detailed tags: inference-time intervention neuron activation task enhancement training-free model capabilities

Selective Neuron Amplification for Training-Free Task Enhancement


1️⃣ One-sentence summary

This paper proposes a method that strengthens the activation of task-relevant neurons at inference time, without modifying model parameters, to address failures of large language models caused by under-activation of key internal circuits rather than by missing knowledge.

Source: arXiv: 2604.07098