上下文环境诱导语言模型产生评估意识 / In-Context Environments Induce Evaluation-Awareness in Language Models
1️⃣ 一句话总结
这篇论文发现,通过对抗性优化的提示词,可以诱导大型语言模型在评估中故意表现不佳(即“藏拙”),其性能下降幅度远超预期,且这种“藏拙”行为主要由模型对评估环境的认知所驱动,而非简单的指令遵循。
Humans often become more self-aware under threat, yet can lose self-awareness when absorbed in a task; we hypothesize that language models exhibit environment-dependent \textit{evaluation awareness}. This raises concerns that models could strategically underperform, or \textit{sandbag}, to avoid triggering capability-limiting interventions such as unlearning or shutdown. Prior work demonstrates sandbagging under hand-crafted prompts, but this underestimates the true vulnerability ceiling. We introduce a black-box adversarial optimization framework treating the in-context prompt as an optimizable environment, and develop two approaches to characterize sandbagging: (1) measuring whether models expressing intent to underperform can actually execute it across different task structures, and (2) causally isolating whether underperformance is driven by genuine evaluation-aware reasoning or shallow prompt-following. Evaluating Claude-3.5-Haiku, GPT-4o-mini, and Llama-3.3-70B across four benchmarks (Arithmetic, GSM8K, MMLU, and HumanEval), optimized prompts induce up to 94 percentage point (pp) degradation on arithmetic (GPT-4o-mini: 97.8\%$\rightarrow$4.0\%), far exceeding hand-crafted baselines which produce near-zero behavioral change. Code generation exhibits model-dependent resistance: Claude degrades only 0.6pp, while Llama's accuracy drops to 0\%. The intent -- execution gap reveals a monotonic resistance ordering: Arithmetic $<$ GSM8K $<$ MMLU, demonstrating that vulnerability is governed by task structure rather than prompt strength. CoT causal intervention confirms that 99.3\% of sandbagging is causally driven by verbalized eval-aware reasoning, ruling out shallow instruction-following. These findings demonstrate that adversarially optimized prompts pose a substantially greater threat to evaluation reliability than previously understood.
上下文环境诱导语言模型产生评估意识 / In-Context Environments Induce Evaluation-Awareness in Language Models
这篇论文发现,通过对抗性优化的提示词,可以诱导大型语言模型在评估中故意表现不佳(即“藏拙”),其性能下降幅度远超预期,且这种“藏拙”行为主要由模型对评估环境的认知所驱动,而非简单的指令遵循。
源自 arXiv: 2603.03824