arXiv submission date: 2026-02-19
📄 Abstract - Fail-Closed Alignment for Large Language Models

We identify a structural weakness in current large language model (LLM) alignment: modern refusal mechanisms are fail-open. While existing approaches encode refusal behaviors across multiple latent features, suppressing a single dominant feature (via prompt-based jailbreaks) can cause alignment to collapse, leading to unsafe generation. Motivated by this, we propose fail-closed alignment as a design principle for robust LLM safety: refusal mechanisms should remain effective even under partial failures via redundant, independent causal pathways. We present a concrete instantiation of this principle: a progressive alignment framework that iteratively identifies and ablates previously learned refusal directions, forcing the model to reconstruct safety along new, independent subspaces. Across four jailbreak attacks, we achieve the strongest overall robustness while mitigating over-refusal and preserving generation quality, with small computational overhead. Our mechanistic analyses confirm that models trained with our method encode multiple, causally independent refusal directions that prompt-based jailbreaks cannot suppress simultaneously, providing empirical support for fail-closed alignment as a principled foundation for robust LLM safety.
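
To make the "identify and ablate refusal directions" step concrete, here is a minimal sketch of the underlying mechanics on toy tensors. It assumes a difference-of-means direction estimate and a rank-1 projection ablation, which are common choices in refusal-direction work but not confirmed by the abstract; the toy three-round loop, random activations, and all variable names are illustrative assumptions, and the paper's actual framework additionally fine-tunes the model after each ablation round.

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """Estimate a refusal direction as the (unit-normalized) difference of mean
    activations between harmful and harmless prompts. Assumption: the paper only
    states that refusal directions are 'identified'."""
    d = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return d / d.norm()

def ablate(acts: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of each activation along `direction` (rank-1 ablation)."""
    coeffs = acts @ direction                       # (batch,)
    return acts - coeffs.unsqueeze(-1) * direction  # (batch, hidden)

# Toy demonstration: random activations stand in for a model's hidden states.
torch.manual_seed(0)
hidden = 64
harmful = torch.randn(32, hidden) + 2.0   # pretend harmful prompts shift activations
harmless = torch.randn(32, hidden)

learned_directions = []
acts = harmful.clone()
for round_idx in range(3):                # each round finds and removes one direction
    d = refusal_direction(acts, harmless)
    learned_directions.append(d)
    acts = ablate(acts, d)                # later rounds must use a different subspace
    residual = (acts @ d).abs().max().item()
    print(f"round {round_idx}: max residual along d = {residual:.2e}")
```

In the full framework, this ablation would be applied while retraining the safety objective, so the model is forced to encode refusal along a new direction each round rather than reusing the one just removed.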

Top-level tags: llm model training systems
Detailed tags: alignment safety jailbreak robustness refusal mechanisms fail-closed design

Fail-Closed Alignment for Large Language Models


1️⃣ One-Sentence Summary

This paper finds that the safety alignment mechanisms of current large language models have a "fail-open" weakness that specific attacks can bypass. It therefore proposes a new "fail-closed" alignment framework in which the model learns multiple independent safety pathways, so that even if some pathways fail, the model can still refuse to generate harmful content, significantly improving safety.

Source: arXiv: 2602.16977