arXiv submission date: 2026-05-13
📄 Abstract - Decoupled and Divergence-Conditioned Prompt for Multi-domain Dynamic Graph Foundation Models

Dynamic graphs are ubiquitous in real-world systems, and building generalizable dynamic Graph Foundation Models (GFMs) has become a frontier in graph learning. However, dynamic graphs from different domains pose fundamental challenges to unified modeling, as their semantic and temporal patterns are inherently inconsistent, making multi-domain pre-training difficult. Consequently, the widely used "pretrain-then-finetune" paradigm often suffers from severe negative knowledge transfer. To the best of our knowledge, no multi-domain dynamic GFM exists. In this work, we propose DyGFM, a Dynamic Graph Foundation Model over multiple domains based on decoupled and divergence-conditioned prompting. To disentangle transferable semantics from domain-specific dynamics, we introduce a dual-branch pre-training strategy with semantic-temporal decoupling. To alleviate negative transfer during domain adaptation, we further develop a cross-domain routing mechanism with divergence-aware expert selection. To enable efficient downstream fine-tuning, we design a divergence-conditioned prompt generator that injects lightweight, learnable graph prompts tailored to semantic and temporal traits. Extensive experiments on continuous dynamic graph benchmarks demonstrate that DyGFM consistently outperforms 12 state-of-the-art baselines on both node classification and link prediction tasks, achieving superior effectiveness and efficiency.
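The abstract describes two coupled ideas: selecting pre-trained experts by how much a downstream domain diverges from each expert's domain, and using that divergence to condition a lightweight prompt injected at fine-tuning time. The paper's architecture is not given here, so the following is only an illustrative sketch under assumed design choices: domain prototypes are mean embeddings, the divergence measure is cosine distance, and the prompt is an additive vector mixed from per-expert prompt parameters (all names are hypothetical).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DivergencePromptRouter(nn.Module):
    """Illustrative sketch, NOT the paper's implementation: route a
    downstream domain to experts by divergence from stored domain
    prototypes, then mix a lightweight additive prompt accordingly."""

    def __init__(self, dim: int, num_experts: int, top_k: int = 2):
        super().__init__()
        # Hypothetical per-expert domain prototypes (e.g. mean embeddings
        # of each pre-training domain, frozen after pre-training).
        self.prototypes = nn.Parameter(torch.randn(num_experts, dim))
        # One learnable prompt vector per expert (the "lightweight,
        # learnable graph prompt"); zero-initialized so it starts inert.
        self.prompts = nn.Parameter(torch.zeros(num_experts, dim))
        self.top_k = top_k

    def forward(self, node_emb: torch.Tensor):
        # Summarize the incoming domain by its mean node embedding.
        domain_stat = node_emb.mean(dim=0)                          # (dim,)
        # Divergence-aware scores: cosine distance to each prototype
        # (the paper's actual divergence measure is an assumption here).
        div = 1.0 - F.cosine_similarity(
            self.prototypes, domain_stat.unsqueeze(0), dim=-1)      # (E,)
        # Lower divergence => higher routing weight; keep only top-k experts.
        weights = F.softmax(-div, dim=0)
        topk = torch.topk(weights, self.top_k)
        mask = torch.zeros_like(weights).scatter(0, topk.indices, topk.values)
        mask = mask / mask.sum()
        # Divergence-conditioned prompt: weighted mix of expert prompts,
        # injected additively into every node embedding.
        prompt = mask @ self.prompts                                # (dim,)
        return node_emb + prompt, mask

# Usage sketch on random node embeddings:
router = DivergencePromptRouter(dim=16, num_experts=4, top_k=2)
out, gate = router(torch.randn(10, 16))
```

The sparse gate is one plausible reading of "divergence-aware expert selection": distant (high-divergence) experts receive zero weight, which is the usual mechanism for limiting negative transfer in mixture-of-experts routing.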

Top-level tags: machine learning, graph learning
Detailed tags: dynamic graphs, foundation model, multi-domain, prompt engineering, negative transfer

Decoupled and Divergence-Conditioned Prompt for Multi-domain Dynamic Graph Foundation Models


1️⃣ One-sentence summary

This paper proposes DyGFM, a dynamic graph foundation model that decouples semantic and temporal features, and uses divergence-based expert selection together with adaptive prompt generation to address the knowledge conflicts and performance degradation that arise when dynamic graph data from different domains are jointly pre-trained and fine-tuned.

Source: arXiv: 2605.13540