arXiv submission date: 2026-01-27
📄 Abstract - Selective Steering: Norm-Preserving Control Through Discriminative Layer Selection

Despite significant progress in alignment, large language models (LLMs) remain vulnerable to adversarial attacks that elicit harmful behaviors. Activation steering techniques offer a promising inference-time intervention approach, but existing methods suffer from critical limitations: activation addition requires careful coefficient tuning and is sensitive to layer-specific norm variations, while directional ablation provides only binary control. Recent work on Angular Steering introduces continuous control via rotation in a 2D subspace, but its practical implementation violates norm preservation, causing distribution shift and generation collapse, particularly in models below 7B parameters. We propose Selective Steering, which addresses these limitations through two key innovations: (1) a mathematically rigorous norm-preserving rotation formulation that maintains activation distribution integrity, and (2) discriminative layer selection that applies steering only where feature representations exhibit opposite-signed class alignment. Experiments across nine models demonstrate that Selective Steering achieves 5.5x higher attack success rates than prior methods while maintaining zero perplexity violations and approximately 100% capability retention on standard benchmarks. Our approach provides a principled, efficient framework for controllable and stable LLM behavior modification. Code: this https URL
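The two ideas in the abstract can be illustrated concretely. Below is a minimal NumPy sketch of (1) a norm-preserving rotation of an activation vector within a 2D subspace, and (2) opposite-signed layer selection. All function names and the specific basis construction here are hypothetical illustrations of the stated technique, not the paper's actual implementation.

```python
import numpy as np

def orthonormal_basis(v1, v2):
    """Gram-Schmidt: build an orthonormal 2D basis spanning two directions
    (e.g., a steering direction and a reference direction)."""
    b1 = v1 / np.linalg.norm(v1)
    b2 = v2 - (v2 @ b1) * b1          # remove the b1 component
    b2 = b2 / np.linalg.norm(b2)
    return b1, b2

def rotate_in_plane(h, b1, b2, theta):
    """Rotate only the component of activation h lying in span{b1, b2}
    by angle theta; the orthogonal complement is untouched, so the
    overall norm of h is preserved exactly."""
    c1, c2 = h @ b1, h @ b2           # coordinates of h in the plane
    r1 = c1 * np.cos(theta) - c2 * np.sin(theta)
    r2 = c1 * np.sin(theta) + c2 * np.cos(theta)
    return h + (r1 - c1) * b1 + (r2 - c2) * b2

def select_layers(class_a_means, class_b_means, direction):
    """Discriminative layer selection (sketch): keep only layers where
    the two classes' mean activations project onto the steering
    direction with opposite signs."""
    d = direction / np.linalg.norm(direction)
    return [i for i, (ma, mb) in enumerate(zip(class_a_means, class_b_means))
            if (ma @ d) * (mb @ d) < 0]
```

Because the rotation acts only inside an orthonormal 2D plane, `||rotate_in_plane(h, ...)|| == ||h||` for any angle, which is exactly the property the abstract says naive Angular Steering implementations violate.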

Top-level tags: llm, model training, model evaluation
Detailed tags: activation steering, adversarial robustness, norm preservation, inference-time intervention, behavior control

Selective Steering: Norm-Preserving Control Through Discriminative Layer Selection


1️⃣ One-sentence summary

This paper proposes a new method called "Selective Steering," which combines a mathematically rigorous norm-preserving rotation with intelligent selection of key network layers to control a large language model's behavior more stably and efficiently at inference time, achieving far higher attack success rates than prior steering methods while leaving the model's normal capabilities essentially intact.

Source: arXiv:2601.19375