arXiv submission date: 2026-01-09
📄 Abstract - An Empirical Study on Preference Tuning Generalization and Diversity Under Domain Shift

Preference tuning aligns pretrained language models to human judgments of quality, helpfulness, or safety by optimizing over explicit preference signals rather than likelihood alone. Prior work has shown that preference tuning degrades performance and reduces helpfulness when evaluated outside the training domain. However, the extent to which adaptation strategies mitigate this domain shift remains unexplored. We address this challenge by conducting a comprehensive and systematic study of alignment generalization under domain shift. We compare five popular alignment objectives and various source-to-target adaptation strategies, including target-domain supervised fine-tuning and pseudo-labeling, across summarization and question-answering helpfulness tasks. Our findings reveal systematic differences in generalization across alignment objectives under domain shift. We show that adaptation strategies based on pseudo-labeling can substantially reduce domain-shift degradation.
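The abstract gives no implementation details, but as a rough illustration of the pseudo-labeling idea it mentions, the sketch below shows one plausible recipe (not taken from the paper): a reward model trained on source-domain preferences scores sampled response pairs for unlabeled target-domain prompts, and the resulting pseudo-preference pairs feed a DPO-style update. The names `reward_model`, `policy`, `sample_pair`, and `target_prompts` are hypothetical stand-ins, not artifacts from the paper.

```python
# Hedged sketch: pseudo-labeling adaptation for preference tuning under domain shift.
# Assumes a source-trained reward_model, a trainable policy, and a sampling helper;
# none of these names come from the paper.
import torch.nn.functional as F


def make_pseudo_preference_pairs(reward_model, policy, target_prompts, sample_pair):
    """Build pseudo-preference pairs on unlabeled target-domain prompts.

    For each prompt, two candidate responses are sampled from the current policy,
    and the source-trained reward model decides which is "chosen" vs. "rejected".
    """
    pairs = []
    for prompt in target_prompts:
        resp_a, resp_b = sample_pair(policy, prompt)   # two sampled completions
        score_a = reward_model(prompt, resp_a)         # scalar reward under source preferences
        score_b = reward_model(prompt, resp_b)
        chosen, rejected = (resp_a, resp_b) if score_a >= score_b else (resp_b, resp_a)
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs


def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective, applied here to pseudo-labeled pairs.

    Each argument is the summed token log-probability of a response under the
    trainable policy or the frozen reference model.
    """
    policy_margin = policy_logp_chosen - policy_logp_rejected
    ref_margin = ref_logp_chosen - ref_logp_rejected
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()
```

Once such pseudo-preference pairs exist, the target-domain update reduces to ordinary preference optimization, which suggests why this kind of adaptation can, in principle, be combined with any of the alignment objectives the study compares.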

Top-level tags: llm, model training, natural language processing
Detailed tags: preference tuning, domain shift, alignment generalization, pseudo-labeling, adaptation strategies

An Empirical Study on Preference Tuning Generalization and Diversity Under Domain Shift


1️⃣ One-sentence summary

Through a systematic study, this paper finds that large language models optimized on human preferences degrade in performance when applied to new domains, but adaptation strategies such as pseudo-labeling can effectively mitigate the negative impact of this domain shift.

Source: arXiv:2601.05882