arXiv submission date: 2026-01-15
📄 Abstract - The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models

Large language models can represent a variety of personas but typically default to a helpful Assistant identity cultivated during post-training. We investigate the structure of the space of model personas by extracting activation directions corresponding to diverse character archetypes. Across several different models, we find that the leading component of this persona space is an "Assistant Axis," which captures the extent to which a model is operating in its default Assistant mode. Steering towards the Assistant direction reinforces helpful and harmless behavior; steering away increases the model's tendency to identify as other entities. Moreover, steering away with more extreme values often induces a mystical, theatrical speaking style. We find this axis is also present in pre-trained models, where it primarily promotes helpful human archetypes like consultants and coaches and inhibits spiritual ones. Measuring deviations along the Assistant Axis predicts "persona drift," a phenomenon where models slip into exhibiting harmful or bizarre behaviors that are uncharacteristic of their typical persona. We find that persona drift is often driven by conversations demanding meta-reflection on the model's processes or featuring emotionally vulnerable users. We show that restricting activations to a fixed region along the Assistant Axis can stabilize model behavior in these scenarios -- and also in the face of adversarial persona-based jailbreaks. Our results suggest that post-training steers models toward a particular region of persona space but only loosely tethers them to it, motivating work on training and steering strategies that more deeply anchor models to a coherent persona.
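The abstract describes three operations: extracting a persona direction from activations, steering by adding that direction, and restricting activations to a fixed region along the axis. The following is a minimal illustrative sketch of those ideas in numpy; the shapes, the difference-of-means extraction, and all function names are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch: extract a persona axis, steer along it, and clamp
# activations to a band on it. All details here are illustrative assumptions.

rng = np.random.default_rng(0)
d_model = 16

# Assume hidden activations collected under two conditions:
# the default Assistant persona vs. alternative character personas.
assistant_acts = rng.normal(0.5, 1.0, size=(100, d_model))
other_acts = rng.normal(-0.5, 1.0, size=(100, d_model))

# A simple difference-of-means direction, normalized to unit length.
axis = assistant_acts.mean(axis=0) - other_acts.mean(axis=0)
axis /= np.linalg.norm(axis)

def steer(h, alpha):
    """Add a scaled persona direction to a hidden state."""
    return h + alpha * axis

def clamp_to_band(h, lo, hi):
    """Clip the component of h along the axis to [lo, hi],
    approximating 'restricting activations to a fixed region'."""
    proj = float(h @ axis)
    return h + (np.clip(proj, lo, hi) - proj) * axis

h = rng.normal(size=d_model)
h_steered = steer(h, alpha=4.0)          # push toward the Assistant direction
h_clamped = clamp_to_band(h_steered, lo=-1.0, hi=1.0)
assert abs(float(h_clamped @ axis)) <= 1.0 + 1e-9
```

In a real model these operations would be applied to residual-stream activations via forward hooks at a chosen layer; the clamping step only alters the component along the axis, leaving the orthogonal part of the hidden state untouched.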

Top-level tags: llm, natural language processing, model training
Detailed tags: persona control, activation steering, model safety, jailbreak robustness, post-training alignment

The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models


1️⃣ One-Sentence Summary

This study finds that the persona space of large language models contains a dominant "Assistant Axis" that defines the model's default helpful mode of behavior; by controlling where the model's activations sit along this axis, its behavior can be stabilized, preventing it from drifting away from its normal persona and producing harmful or bizarre outputs.

Source: arXiv:2601.10387