DZ-TDPO: Non-Destructive Temporal Alignment for Mutable State Tracking in Long-Context Dialogue
1️⃣ One-Sentence Summary
This paper proposes a new method called DZ-TDPO, which intelligently adjusts how the model attends to past dialogue, allowing an AI assistant to track shifts in user intent more flexibly over long conversations without degrading the model's general capabilities.
Long-context dialogue systems suffer from State Inertia, where static constraints prevent models from resolving conflicts between evolving user intents and established historical context. To address this, we propose DZ-TDPO, a non-destructive alignment framework that synergizes conflict-aware dynamic KL constraints with a calibrated temporal attention bias. Experiments on the Multi-Session Chat (MSC) dataset demonstrate that DZ-TDPO achieves state-of-the-art win rates (55.4% on Phi-3.5) while maintaining robust zero-shot generalization. Our scaling analysis reveals a "Capacity-Stability Trade-off": while smaller models incur an "alignment tax" (a perplexity surge) to overcome historical inertia, the larger Qwen2.5-7B model achieves a 50.8% win rate with negligible perplexity overhead. This confirms that State Inertia can be alleviated via precise attention regulation rather than destructive weight updates, preserving general capabilities (MMLU) across model scales. Code and data are available: this https URL
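To make the two mechanisms in the abstract concrete, here is a minimal sketch of (a) a recency bias added to attention scores and (b) a KL weight that relaxes as detected intent conflict grows. The function names, the linear bias form, and the hyperparameters `gamma`, `beta_min`, and `beta_max` are all illustrative assumptions, not the paper's actual DZ-TDPO formulation.

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def temporal_bias_attention(scores, positions, current_pos, gamma=0.1):
    """Hypothetical temporal attention bias: subtract a penalty that
    grows linearly with distance from the current turn, so recent
    dialogue turns receive more attention mass. `gamma` is an
    assumed bias-strength hyperparameter."""
    biased = [s - gamma * (current_pos - p) for s, p in zip(scores, positions)]
    return softmax(biased)


def dynamic_kl_weight(conflict_score, beta_min=0.01, beta_max=0.5):
    """Hypothetical conflict-aware KL schedule: when a conflict
    between the user's current intent and the historical context is
    detected (conflict_score near 1), shrink the KL coefficient so
    the policy is allowed to deviate further from the reference
    model; with no conflict, keep the full constraint."""
    return beta_max - (beta_max - beta_min) * conflict_score
```

With equal raw scores, the bias shifts attention toward the most recent turn, and the KL weight drops monotonically as the conflict signal rises, which is the qualitative behavior the abstract describes.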
Source: arXiv:2512.03704