Learning Personalized Agents from Human Feedback
1️⃣ One-Sentence Summary
This paper proposes a framework called PAHF that lets an AI agent continually learn and adapt to each user's unique, and possibly changing, personal preferences through live interaction, providing more personalized service.
Modern AI agents are powerful but often fail to align with the idiosyncratic, evolving preferences of individual users. Prior approaches typically rely on static datasets, either training implicit preference models on interaction history or encoding user profiles in external memory. However, these approaches struggle with new users and with preferences that change over time. We introduce Personalized Agents from Human Feedback (PAHF), a framework for continual personalization in which agents learn online from live interaction using explicit per-user memory. PAHF operationalizes a three-step loop: (1) seeking pre-action clarification to resolve ambiguity, (2) grounding actions in preferences retrieved from memory, and (3) integrating post-action feedback to update memory when preferences drift. To evaluate this capability, we develop a four-phase protocol and two benchmarks in embodied manipulation and online shopping. These benchmarks quantify an agent's ability to learn initial preferences from scratch and subsequently adapt to persona shifts. Our theoretical analysis and empirical results show that integrating explicit memory with dual feedback channels is critical: PAHF learns substantially faster and consistently outperforms both no-memory and single-channel baselines, reducing initial personalization error and enabling rapid adaptation to preference shifts.
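To make the three-step loop concrete, below is a minimal, hypothetical Python sketch of one PAHF interaction. The `agent`/`user` interfaces (`clarification_question`, `parse_preference`, `act`, `observe`, `feedback`) and the key-value memory are illustrative assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class UserMemory:
    """Explicit per-user preference store (simple key-value sketch)."""
    preferences: dict[str, str] = field(default_factory=dict)

    def retrieve(self, task: str) -> dict[str, str]:
        # A real system would retrieve only task-relevant entries;
        # here we simply return everything recorded so far.
        return dict(self.preferences)

    def update(self, feedback: dict[str, str]) -> None:
        # Overwrite stale entries so drifted preferences replace old ones.
        self.preferences.update(feedback)


def pahf_step(agent, user, task: str, memory: UserMemory):
    """One interaction of the three-step PAHF loop (hypothetical interfaces)."""
    # (1) Pre-action clarification: ask only when the task is ambiguous
    #     given what memory already covers.
    known = memory.retrieve(task)
    question = agent.clarification_question(task, known)   # may return None
    if question is not None:
        answer = user.answer(question)
        memory.update(agent.parse_preference(question, answer))
        known = memory.retrieve(task)

    # (2) Grounded action: condition the action on retrieved preferences.
    action = agent.act(task, preferences=known)
    outcome = user.observe(action)

    # (3) Post-action feedback: fold corrections back into memory so the
    #     agent adapts when preferences drift.
    feedback = user.feedback(outcome)                       # may be empty
    if feedback:
        memory.update(agent.parse_preference(task, feedback))
    return action
```

The dual feedback channels the abstract refers to correspond to steps (1) and (3): clarification before acting and corrective feedback after acting, both writing into the same explicit per-user memory.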
Source: arXiv: 2602.16173