arXiv submission date: 2026-03-11
📄 Abstract - Dynamic Knowledge Fusion for Multi-Domain Dialogue State Tracking

The performance of task-oriented dialogue models is strongly tied to how well they track the dialogue state, which records and updates user information across multi-turn interactions. However, current multi-domain DST faces two key challenges: the difficulty of effectively modeling dialogue history and the limited availability of annotated data, both of which hinder model performance. To tackle these problems, we develop a dynamic knowledge fusion framework for multi-domain DST. The model operates in two stages: first, an encoder-only network trained with contrastive learning encodes the dialogue history and candidate slots, selecting relevant slots based on correlation scores; second, dynamic knowledge fusion leverages the structured information of the selected slots as contextual prompts to enhance the accuracy and consistency of dialogue state tracking. This design enables more accurate integration of dialogue context and domain knowledge. Results on multi-domain dialogue benchmarks show that our method notably improves both tracking accuracy and generalization, validating its capability in handling complex dialogue scenarios.
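The two-stage pipeline in the abstract can be sketched in a few lines. The paper does not give implementation details, so everything below is an illustrative assumption: the function names, the use of cosine similarity as the "correlation score", the top-k cutoff, and the flat prompt format are all hypothetical stand-ins for the trained contrastive encoder and the fusion prompting described above.

```python
import numpy as np


def select_relevant_slots(history_emb, slot_embs, slot_names, top_k=3):
    """Stage 1 (sketch): score candidate slots against the encoded dialogue
    history and keep the top-k by cosine similarity. In the paper the
    embeddings would come from an encoder trained with contrastive learning;
    here they are just vectors passed in by the caller."""
    h = history_emb / np.linalg.norm(history_emb)
    s = slot_embs / np.linalg.norm(slot_embs, axis=1, keepdims=True)
    scores = s @ h                       # one correlation score per slot
    top = np.argsort(scores)[::-1][:top_k]
    return [slot_names[i] for i in top]


def build_fusion_prompt(history, selected_slots):
    """Stage 2 (sketch): serialize the selected (domain, slot) pairs as a
    structured prompt prefix that conditions the state-tracking model."""
    slot_block = "; ".join(f"{domain}-{slot}" for domain, slot in selected_slots)
    return f"[slots] {slot_block} [history] {history}"


# Toy 2-d embeddings standing in for real encoder outputs.
names = [("hotel", "name"), ("taxi", "destination"), ("hotel", "area")]
history_emb = np.array([1.0, 0.0])
slot_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])

selected = select_relevant_slots(history_emb, slot_embs, names, top_k=2)
prompt = build_fusion_prompt("I need a hotel in the centre", selected)
```

With these toy vectors, `hotel-name` and `hotel-area` score highest, so only those two slots enter the prompt; filtering before fusion is what keeps irrelevant domains (here `taxi`) from diluting the context.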

Top-level tags: natural language processing, agents, systems
Detailed tags: dialogue state tracking, multi-domain, knowledge fusion, contrastive learning, task-oriented dialogue

Dynamic Knowledge Fusion for Multi-Domain Dialogue State Tracking


1️⃣ One-sentence summary

This paper proposes a dynamic knowledge fusion framework that uses contrastive learning to select relevant dialogue slots and leverages their structured information as prompts, effectively improving the accuracy and generalization of multi-domain dialogue state tracking.

Source: arXiv:2603.10367