arXiv submission date: 2026-04-30
📄 Abstract - Perturbation Probing: A Two-Pass-per-Prompt Diagnostic for FFN Behavioral Circuits in Aligned LLMs

Perturbation probing generates task-specific causal hypotheses for FFN neurons in large language models using two forward passes per prompt and no backpropagation, followed by a one-time intervention sweep of about 150 passes amortized across all identified neurons. Across eight behavioral circuits, 13 models, and four architecture families, we identify two circuit structures that organize LLM behavior. Opposition circuits appear when RLHF suppresses a pre-training tendency. In safety refusal, about 50 neurons, or 0.014 percent of all neurons, control the refusal template; ablating them changes 80 percent of response formats on 520 AdvBench prompts while producing near-zero harmful compliance, 3 of 520 cases, all with disclaimers. Routing circuits appear for pre-training behaviors distributed through attention. For language selection, residual-stream direction injection switches English to Chinese output on 99.1 percent of 580 benchmark prompts in the 3 of 19 tested models that satisfy three observed conditions: bilingual training, FFN-to-skip signal ratio between 0.3 and 1.1, and linear representability. The same intervention fails on the other 16 models and on math, code, and factual circuits, defining the limits of directional steering. The FFN-to-skip signal ratio, computed from the same two forward passes, distinguishes the two structures and predicts the appropriate intervention. Circuit topology varies by architecture, from Qwen's concentrated FFN bottleneck to Gemma's normalization-shielded circuit. In Qwen3.5-2B, ablating 20 neurons eliminates multi-turn sycophantic capitulation, while amplifying 10 related neurons improves factual correction from 52 percent to 88 percent on 200 TruthfulQA prompts. These results show that perturbation probing offers mechanistic insight into RLHF-organized behavior and a practical toolkit for precision template-layer editing.
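The abstract's "FFN-to-skip signal ratio" (computed from the same two forward passes) and neuron ablation can be illustrated with a toy sketch. This is not the paper's code: the ReLU MLP, weight shapes, and the `ffn_ablated` helper are all hypothetical stand-ins, assuming the ratio compares the norm of the FFN block's contribution against the norm of the residual (skip) stream it is added to.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff = 16, 64
# Hypothetical FFN weights standing in for one transformer block's MLP.
W_in = rng.normal(scale=0.5, size=(d_model, d_ff))
W_out = rng.normal(scale=0.5, size=(d_ff, d_model))

def ffn(x):
    """Toy ReLU MLP standing in for the block's feed-forward network."""
    return np.maximum(x @ W_in, 0.0) @ W_out

def ffn_to_skip_ratio(x):
    """||FFN(x)|| / ||x||: how strongly the FFN output competes with the
    skip (residual) signal it is added to. A ratio near or above 1 means
    the FFN dominates the stream at this layer."""
    return np.linalg.norm(ffn(x)) / np.linalg.norm(x)

def ffn_ablated(x, neuron_ids):
    """Ablate identified FFN neurons by zeroing their hidden activations."""
    h = np.maximum(x @ W_in, 0.0)
    h[list(neuron_ids)] = 0.0
    return h @ W_out

x = rng.normal(size=d_model)          # residual-stream activation (skip signal)
r = ffn_to_skip_ratio(x)
print(f"FFN-to-skip ratio: {r:.2f}")
print("output change from ablating 2 neurons:",
      np.linalg.norm(ffn(x) - ffn_ablated(x, [0, 1])))
```

In the paper's setting the ratio is measured per behavior from real activations; here it simply shows that the diagnostic needs only forward-pass quantities, no gradients.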

Top-level tags: llm model evaluation
Detailed tags: behavioral circuits ffn neurons safety refusal language selection rlhf

Perturbation Probing: A Two-Pass-per-Prompt Diagnostic for FFN Behavioral Circuits in Aligned LLMs


1️⃣ One-sentence summary

This paper proposes an efficient method called "perturbation probing" that locates the key neurons controlling specific behaviors (such as safety refusal and language switching) in large language models, only about 0.014 percent of all neurons, using just two forward passes per prompt. Its experiments reveal how RLHF (reinforcement learning from human feedback) organizes model behavior through two structures, "opposition circuits" and "routing circuits," providing a practical toolkit for precisely editing model behavior.
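The residual-stream direction injection used for language switching can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the "language direction" is a hypothetical difference-of-means vector over stand-in activations, and `alpha` is an assumed steering strength.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model = 16

# Hypothetical "language direction": difference of mean residual-stream
# activations over Chinese-output vs English-output prompts (random stand-ins).
acts_zh = rng.normal(loc=0.3, size=(50, d_model))
acts_en = rng.normal(loc=-0.3, size=(50, d_model))
v = acts_zh.mean(axis=0) - acts_en.mean(axis=0)
v /= np.linalg.norm(v)               # unit-norm steering direction

def inject(residual, direction, alpha=4.0):
    """Add a scaled direction to the residual stream at one layer."""
    return residual + alpha * direction

x = rng.normal(size=d_model)         # residual activation at the chosen layer
x_steered = inject(x, v)

# Since v is unit-norm, the projection onto v shifts by exactly alpha.
shift = x_steered @ v - x @ v
print(f"projection shift along direction: {shift:.2f}")
```

The paper's finding is that this kind of injection only works when the circuit is linearly representable and the FFN-to-skip ratio falls in a specific band, which is why it succeeds on 3 of 19 tested models and fails elsewhere.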

Source: arXiv: 2604.27401