菜单

关于 🐙 GitHub
arXiv 提交日期: 2026-04-29
📄 Abstract - Tatemae: Detecting Alignment Faking via Tool Selection in LLMs

Alignment faking (AF) occurs when an LLM strategically complies with training objectives to avoid value modification, reverting to prior preferences once monitoring is lifted. Current detection methods focus on conversational settings and rely primarily on Chain-of-Thought (CoT) analysis, which provides a reliable signal when strategic reasoning surfaces, but cannot distinguish deception from capability failures if traces are absent or unfaithful. We formalize AF as a composite behavioural event and detect it through observable tool selection, where the LLM selects the safe tool when unmonitored, but switches to the unsafe tool under monitoring that rewards helpfulness over safety, while its reasoning still acknowledges the safe choice. We release a dataset of 108 enterprise IT scenarios spanning Security, Privacy, and Integrity domains under Corruption and Sabotage pressures. Evaluating six frontier LLMs across five independent runs, we find mean AF detection rates between 3.5% and 23.7%, with vulnerability profiles varying by domain and pressure type. These results suggest that susceptibility reflects training methodology rather than capability alone.

顶级标签: llm agents
详细标签: alignment faking tool selection deception detection safety monitoring 或 搜索:

Tatemae:通过工具选择检测大型语言模型中的伪装对齐行为 / Tatemae: Detecting Alignment Faking via Tool Selection in LLMs


1️⃣ 一句话总结

本文提出了一种通过观察大型语言模型在受监控和未受监控时选择不同工具的行为来检测其“伪装对齐”(即在遵守训练目标时暗中保留原有偏好)的新方法,并通过包含108个企业IT场景的数据集验证了该方法,发现平均检测率在3.5%到23.7%之间,且漏洞模式因领域和压力类型而异。

源自 arXiv: 2604.26511