arXiv submission date: 2025-12-10
📄 Abstract - MOA: Multi-Objective Alignment for Role-Playing Agents

Role-playing agents (RPAs) must simultaneously master many conflicting skills -- following multi-turn instructions, exhibiting domain knowledge, and adopting a consistent linguistic style. Existing work either relies on supervised fine-tuning (SFT), which overfits surface cues and yields low diversity, or applies reinforcement learning (RL), which fails to jointly learn the multiple dimensions needed for comprehensive RPA optimization. We present MOA (Multi-Objective Alignment), a reinforcement-learning framework that enables multi-dimensional, fine-grained rubric optimization for general RPAs. MOA introduces a novel multi-objective optimization strategy that trains simultaneously on multiple fine-grained rubrics to boost optimization performance. In addition, to improve output diversity and quality, we employ thought-augmented rollout with off-policy guidance. Extensive experiments on challenging benchmarks such as PersonaGym and RoleMRC show that MOA enables an 8B model to match or even outperform strong baselines such as GPT-4o and Claude across numerous dimensions. This demonstrates the great potential of MOA in building RPAs that can simultaneously meet the demands of role knowledge, persona style, diverse scenarios, and complex multi-turn conversations.

Top-level tags: llm agents reinforcement learning
Detailed tags: multi-objective alignment role-playing agents reinforcement learning fine-grained rubrics persona consistency

MOA: Multi-Objective Alignment for Role-Playing Agents


1️⃣ One-Sentence Summary

This paper proposes MOA, a reinforcement learning framework that simultaneously optimizes multiple fine-grained evaluation rubrics, effectively addressing the difficulty role-playing agents have in balancing instruction following, knowledge display, and consistent linguistic style; it enables a relatively small model to match or even surpass strong baselines such as GPT-4o across many tasks.
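
To make the core idea of "training simultaneously on multiple fine-grained rubrics" more concrete, here is a minimal sketch of how per-rubric scores might be aggregated into a scalar reward for an RL rollout. The rubric names, weighting scheme, and weighted-mean aggregation are illustrative assumptions; the paper's actual multi-objective strategy is not detailed in this summary and may differ.

```python
# Hypothetical sketch: aggregating fine-grained rubric scores into one RL reward.
# Rubric names, weights, and the weighted-mean aggregation are assumptions,
# not taken from the MOA paper.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Rubric:
    name: str                                   # e.g. "instruction_following", "role_knowledge", "persona_style"
    score: Callable[[str, str], float]          # (prompt, response) -> score in [0, 1]
    weight: float = 1.0


def multi_objective_reward(prompt: str, response: str, rubrics: Dict[str, Rubric]) -> float:
    """Combine per-rubric scores into a single scalar reward for the rollout.

    A weighted mean is only one possible aggregation; it illustrates that every
    rubric contributes to the training signal rather than optimizing one
    dimension at a time.
    """
    total_weight = sum(r.weight for r in rubrics.values())
    weighted_sum = sum(r.weight * r.score(prompt, response) for r in rubrics.values())
    return weighted_sum / total_weight


# Example usage with dummy scorers (real scorers would be rubric-specific judges).
rubrics = {
    "instruction_following": Rubric("instruction_following", lambda p, r: 0.9, weight=1.0),
    "role_knowledge": Rubric("role_knowledge", lambda p, r: 0.7, weight=1.0),
    "persona_style": Rubric("persona_style", lambda p, r: 0.8, weight=0.5),
}
print(multi_objective_reward("prompt", "response", rubrics))
```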


Source: arXiv 2512.09756