One Adapts to Any: Meta Reward Modeling for Personalized LLM Alignment
1️⃣ One-Sentence Summary
This paper proposes a new method called Meta Reward Modeling, which uses meta-learning so that a reward model can learn and adapt to a new user's personalized preferences from only a small amount of feedback, enabling more efficient personalized alignment of large language models.
Alignment of Large Language Models (LLMs) aims to bring model outputs in line with human preferences, and personalized alignment further adapts models to individual users. This relies on personalized reward models that capture user-specific preferences and automatically provide individualized feedback. However, developing these models faces two critical challenges: the scarcity of feedback from individual users and the need for efficient adaptation to unseen users. We argue that addressing these constraints requires a paradigm shift from fitting data to learn user preferences to learning the process of preference adaptation itself. To realize this, we propose Meta Reward Modeling (MRM), which reformulates personalized reward modeling as a meta-learning problem. Specifically, we represent each user's reward model as a weighted combination of base reward functions, and optimize the initialization of these weights using a Model-Agnostic Meta-Learning (MAML)-style framework to support fast adaptation under limited feedback. To ensure robustness, we introduce the Robust Personalization Objective (RPO), which places greater emphasis on hard-to-learn users during meta-optimization. Extensive experiments on personalized preference datasets validate that MRM enhances few-shot personalization, improves user robustness, and consistently outperforms baselines.
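To make the abstract's recipe concrete, here is a minimal sketch of the idea as described above: each user's reward is a weighted combination of K base reward functions, an inner loop adapts the weights to a user's few-shot pairwise feedback with a Bradley-Terry loss, and an outer, MAML-style loop meta-learns the shared initialization, with harder users weighted more heavily in the spirit of RPO. Everything here is an assumption for illustration only (the function names, the choice of first-order MAML, the softmax weighting standing in for RPO, and all hyperparameters), not the paper's implementation.

```python
# Illustrative sketch of MRM from the abstract; all specifics are assumptions.
import numpy as np

rng = np.random.default_rng(0)
K = 8  # number of base reward functions (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pairwise_loss_grad(w, D):
    """Bradley-Terry loss and gradient on pairwise feedback.

    D has shape (n, K): each row is f(x, y_chosen) - f(x, y_rejected),
    i.e. the difference of base-reward features for one preference pair.
    """
    z = D @ w                                       # reward margins
    loss = -np.mean(np.log(sigmoid(z) + 1e-12))
    grad = -(D * (1.0 - sigmoid(z))[:, None]).mean(axis=0)
    return loss, grad

def adapt(w0, D_support, inner_lr=0.5, steps=3):
    """Inner loop: adapt the weight vector to one user's few-shot feedback."""
    w = w0.copy()
    for _ in range(steps):
        _, g = pairwise_loss_grad(w, D_support)
        w -= inner_lr * g
    return w

def meta_train(users, w0, outer_lr=0.1, epochs=200, rpo_temp=1.0):
    """Outer loop (first-order MAML): update the shared initialization w0.

    The RPO-style weighting below (softmax over per-user query losses) is an
    assumed stand-in for "emphasize hard-to-learn users".
    """
    for _ in range(epochs):
        losses, grads = [], []
        for D_support, D_query in users:
            w_u = adapt(w0, D_support)
            loss_q, grad_q = pairwise_loss_grad(w_u, D_query)
            losses.append(loss_q)
            grads.append(grad_q)                    # FOMAML: drop second-order terms
        weights = np.exp(np.array(losses) / rpo_temp)
        weights /= weights.sum()                    # harder users get larger weight
        meta_grad = sum(a * g for a, g in zip(weights, grads))
        w0 -= outer_lr * meta_grad
    return w0

# Toy data: each synthetic user has a hidden preference vector; base-reward
# feature differences are random, with labels consistent with that vector.
def make_user(n_support=8, n_query=32):
    true_w = rng.normal(size=K)
    def sample(n):
        D = rng.normal(size=(n, K))
        return D * np.sign(D @ true_w)[:, None]     # orient pairs toward true_w
    return sample(n_support), sample(n_query)

train_users = [make_user() for _ in range(20)]
w0 = meta_train(train_users, np.zeros(K))

# Few-shot adaptation to an unseen user
D_s, D_q = make_user()
w_new = adapt(w0, D_s)
print("query loss after few-shot adaptation:", pairwise_loss_grad(w_new, D_q)[0])
```

In this sketch the meta-learned initialization only needs a handful of preference pairs and a few gradient steps to specialize to a new user, which is the few-shot personalization behavior the abstract claims for MRM.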
Source: arXiv: 2601.18731