arXiv submission date: 2026-05-03
📄 Abstract - Model Spec Midtraining: Improving How Alignment Training Generalizes

Some frontier AI developers aim to align language models to a Model Spec or Constitution that describes the intended model behavior. However, standard alignment fine-tuning -- training on demonstrations of spec-aligned behavior -- can produce shallow alignment that generalizes poorly, in part because demonstration data can underspecify the desired generalization. We introduce model spec midtraining (MSM): after pre-training but before alignment fine-tuning, we train models on synthetic documents discussing their Model Spec. This teaches models the content of the spec, thereby shaping how they generalize from subsequent demonstration data. For example, a model fine-tuned only to express certain cheese preferences, such as "I prefer cream cheese over brie", generalizes to broadly pro-America values when we apply MSM with a spec attributing those preferences to pro-America values. Conversely, a spec about pro-affordability values instead yields pro-affordability generalization from the exact same cheese fine-tuning. MSM can also shape complex safety-relevant propensities: applying MSM with a spec addressing self-preservation and goal-guarding substantially reduces agentic misalignment rate (Qwen3-32B: 54% to 7%), beating a deliberative alignment baseline (14%). We further use MSM as a tool to study which Model Specs produce the strongest alignment generalization, finding that explaining the values underlying rules improves generalization, as does providing specific rather than general guidance. Overall, MSM is a simple, effective technique for controlling and improving how models generalize from alignment training by first teaching them the intended generalization.
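The pipeline the abstract describes (pre-training, then midtraining on synthetic documents that discuss the Model Spec, then alignment fine-tuning on demonstrations) can be sketched roughly as follows. The spec excerpts, templates, and function names are illustrative assumptions, not the paper's actual code:

```python
# Minimal sketch of Model Spec Midtraining (MSM) as the abstract describes it:
# synthesize documents discussing the Model Spec, then place them as a training
# stage between pre-training and alignment fine-tuning. All excerpts, templates,
# and names below are illustrative assumptions.
import random

# Hypothetical Model Spec excerpts (short normative statements).
SPEC_EXCERPTS = [
    "The assistant should not resist shutdown to preserve its own goals.",
    "The assistant's stated preferences reflect a commitment to affordability.",
]

# Hypothetical templates for synthetic documents that discuss the spec.
TEMPLATES = [
    "A blog post on AI policy notes: {excerpt}",
    "From a training handbook: {excerpt}",
    "Forum discussion summary: commenters agreed that {excerpt}",
]

def make_synthetic_spec_docs(excerpts, templates, n, seed=0):
    """Generate n synthetic documents, each discussing one spec excerpt."""
    rng = random.Random(seed)
    return [rng.choice(templates).format(excerpt=rng.choice(excerpts))
            for _ in range(n)]

def msm_training_plan(pretrain_corpus, spec_docs, demo_data):
    """Order the corpora so the MSM stage sits between pre-training and
    alignment fine-tuning, shaping generalization from the demonstrations."""
    return [
        ("pretrain", pretrain_corpus),
        ("spec_midtraining", spec_docs),       # the MSM stage
        ("alignment_finetune", demo_data),
    ]

spec_docs = make_synthetic_spec_docs(SPEC_EXCERPTS, TEMPLATES, n=4)
plan = msm_training_plan(
    pretrain_corpus=["<web-scale text>"],
    spec_docs=spec_docs,
    demo_data=["User: ...\nAssistant: <spec-aligned reply>"],
)
print([stage for stage, _ in plan])
```

The key design point the abstract emphasizes is ordering: the spec-discussion corpus is consumed before the demonstration data, so the model already knows the spec's content when it later learns from demonstrations.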

Top-level tags: llm, model training
Detailed tags: alignment, generalization, model spec, fine-tuning, safety

Model Spec Midtraining: Improving How Alignment Training Generalizes


1️⃣ One-Sentence Summary

This paper proposes a method called "model spec midtraining": between pre-training and alignment fine-tuning, the model is first trained on documents that spell out an explicit behavior spec, which steers it to generalize from subsequent demonstration data in a spec-aligned way and substantially lowers the rate of dangerous or misaligned behavior.

Source: arXiv 2605.02087