arXiv submission date: 2026-02-04
📄 Abstract - PersoDPO: Scalable Preference Optimization for Instruction-Adherent, Persona-Grounded Dialogue via Multi-LLM Evaluation

Personalization and contextual coherence are two essential components in building effective persona-grounded dialogue systems. These aspects play a crucial role in enhancing user engagement and ensuring responses are more relevant and consistent with user identity. However, recent studies indicate that open-source large language models (LLMs) continue to struggle to generate responses that are both contextually grounded and aligned with persona cues, despite exhibiting strong general conversational abilities such as fluency and naturalness. We present PersoDPO, a scalable preference optimization framework that uses supervision signals from automatic evaluations of responses generated by both closed-source and open-source LLMs to fine-tune dialogue models. The framework integrates evaluation metrics targeting coherence and personalization, along with a length-format compliance feature to promote instruction adherence. These signals are combined to automatically construct high-quality preference pairs without manual annotation, enabling a scalable and reproducible training pipeline. Experiments on the FoCus dataset show that an open-source language model fine-tuned with the PersoDPO framework consistently outperforms strong open-source baselines and a standard Direct Preference Optimization (DPO) variant across multiple evaluation dimensions.
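To make the pipeline more concrete, below is a minimal Python sketch of how preference pairs might be assembled from the automatic evaluation signals the abstract describes (coherence, personalization, and length-format compliance). All names, the scoring weights, and the margin threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of PersoDPO-style preference-pair construction.
# Class/function names, weights, and the margin are assumptions for illustration.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class CandidateResponse:
    """A response produced by one of several LLMs for the same dialogue turn."""
    model_name: str
    text: str
    coherence: float        # automatic coherence score in [0, 1]
    personalization: float  # automatic persona-grounding score in [0, 1]
    length_ok: bool         # whether the response meets the length/format instruction


def aggregate_score(c: CandidateResponse,
                    w_coherence: float = 0.5,
                    w_persona: float = 0.5) -> float:
    """Combine the evaluation signals into a single preference score (assumed weighting)."""
    score = w_coherence * c.coherence + w_persona * c.personalization
    # Penalize responses that violate the length/format instruction.
    return score if c.length_ok else score - 1.0


def build_preference_pair(candidates: List[CandidateResponse],
                          margin: float = 0.1
                          ) -> Optional[Tuple[CandidateResponse, CandidateResponse]]:
    """Pick (chosen, rejected) from the best- and worst-scoring candidates,
    keeping the pair only if the score gap exceeds a margin."""
    ranked = sorted(candidates, key=aggregate_score, reverse=True)
    chosen, rejected = ranked[0], ranked[-1]
    if aggregate_score(chosen) - aggregate_score(rejected) < margin:
        return None  # skip ambiguous pairs
    return chosen, rejected


if __name__ == "__main__":
    turn_candidates = [
        CandidateResponse("closed-source-llm", "Given your interest in history, ...", 0.9, 0.8, True),
        CandidateResponse("open-source-llm", "Sure, here is some information ...", 0.6, 0.3, False),
    ]
    pair = build_preference_pair(turn_candidates)
    if pair:
        print("chosen:", pair[0].model_name, "| rejected:", pair[1].model_name)
```

The resulting (chosen, rejected) pairs would then feed a standard DPO objective; discarding low-margin pairs is one plausible way to keep only unambiguous supervision, though the paper's exact selection rule may differ.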

Top-level tags: llm, natural language processing, model training
Detailed tags: preference optimization, dialogue systems, personalization, automatic evaluation, dpo

PersoDPO: Scalable Preference Optimization for Instruction-Adherent, Persona-Grounded Dialogue via Multi-LLM Evaluation


1️⃣ One-Sentence Summary

This paper proposes PersoDPO, a scalable training framework that builds high-quality training data by automatically evaluating responses from multiple large language models, enabling an open-source dialogue model to learn to generate replies that are both grounded in the dialogue context and aligned with the user's persona, outperforming existing methods.

Source: arXiv 2602.04493