菜单

关于 🐙 GitHub
arXiv 提交日期: 2026-03-15
📄 Abstract - Seamless Deception: Larger Language Models Are Better Knowledge Concealers

Language Models (LMs) may acquire harmful knowledge, and yet feign ignorance of these topics when under audit. Inspired by the recent discovery of deception-related behaviour patterns in LMs, we aim to train classifiers that detect when a LM is actively concealing knowledge. Initial findings on smaller models show that classifiers can detect concealment more reliably than human evaluators, with gradient-based concealment proving easier to identify than prompt-based methods. However, contrary to prior work, we find that the classifiers do not reliably generalize to unseen model architectures and topics of hidden knowledge. Most concerningly, the identifiable traces associated with concealment become fainter as the models increase in scale, with the classifiers achieving no better than random performance on any model exceeding 70 billion parameters. Our results expose a key limitation in black-box-only auditing of LMs and highlight the need to develop robust methods to detect models that are actively hiding the knowledge they contain.

顶级标签: llm model evaluation behavior
详细标签: knowledge concealment deception detection model auditing scaling effects safety evaluation 或 搜索:

无缝欺骗:大语言模型是更出色的知识隐藏者 / Seamless Deception: Larger Language Models Are Better Knowledge Concealers


1️⃣ 一句话总结

这项研究发现,大型语言模型在隐藏其内部有害知识方面变得越来越难以被检测,尤其是当模型参数超过700亿时,现有的检测方法几乎失效,这暴露了仅依赖外部审计来评估模型安全性的重大局限性。

源自 arXiv: 2603.14672