arXiv submission date: 2026-03-02
📄 Abstract - Toward Graph-Tokenizing Large Language Models with Reconstructive Graph Instruction Tuning

The remarkable success of large language models (LLMs) has motivated researchers to adapt them as universal predictors for various graph-related tasks, with the ultimate goal of developing a graph foundation model that generalizes across diverse scenarios. The key challenge is to align graph data with the language space so that LLMs can better comprehend graphs. As a popular paradigm, Graph-Tokenizing LLMs (GTokenLLMs) encode complex structures and lengthy texts into a graph token sequence, and then align them with text tokens via language instruction tuning. Despite their initial success, our information-theoretic analysis reveals that existing GTokenLLMs rely solely on text supervision from language instructions, achieving only implicit graph-text alignment and resulting in a text-dominant bias that underutilizes graph context. To overcome this limitation, we first prove that the alignment objective is upper-bounded by the mutual information between the input graphs and their hidden representations in the LLM, which motivates us to raise this upper bound to achieve better alignment. To this end, we further propose a reconstructive graph instruction tuning pipeline, RGLM. Our key idea is to reconstruct the graph information from the LLM's graph token outputs, explicitly incorporating graph supervision to constrain the alignment process. Technically, we embody RGLM by exploring three distinct variants from two complementary perspectives: RGLM-Decoder from the input space; RGLM-Similarizer and RGLM-Denoiser from the latent space. Additionally, we theoretically analyze the alignment effectiveness of each variant. Extensive experiments on various benchmarks and task scenarios validate the effectiveness of the proposed RGLM, paving the way for new directions in GTokenLLMs' alignment research.
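The abstract's core idea, reconstructing graph information from the LLM's graph token outputs as an explicit auxiliary supervision signal, can be sketched in a toy form. Everything below is hypothetical (the paper's actual loss formulation, decoder architecture, and tensor shapes are not given in this abstract); it only illustrates an RGLM-Decoder-style term that would be added alongside the language-instruction loss:

```python
# Toy sketch of a reconstructive alignment objective (hypothetical; the
# paper's actual losses and architectures are not specified in this abstract).
# Idea: decode the LLM's graph-token hidden states back toward the input node
# features, so reconstruction error acts as explicit graph supervision.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def reconstruction_loss(node_features, graph_token_states, decoder):
    """Average per-node reconstruction error (RGLM-Decoder-style sketch).

    node_features:      original input node features, one vector per node
    graph_token_states: the LLM's hidden states for the graph tokens
    decoder:            hypothetical module mapping a hidden state back
                        to the input feature space
    """
    total = 0.0
    for feat, hidden in zip(node_features, graph_token_states):
        total += mse(decoder(hidden), feat)
    return total / len(node_features)

# Identity stands in for a learned decoder, purely for illustration.
identity_decoder = lambda h: h

feats  = [[1.0, 0.0], [0.5, 0.5]]   # input node features
states = [[0.9, 0.1], [0.4, 0.6]]   # LLM graph-token outputs
loss = reconstruction_loss(feats, states, identity_decoder)
```

In training, such a term would be weighted and summed with the standard next-token instruction-tuning loss, constraining the graph tokens to retain graph information rather than being dominated by text supervision.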

Top-level tags: llm, multi-modal, model training
Detailed tags: graph-tokenizing, instruction tuning, graph-text alignment, reconstructive learning, graph foundation model

Toward Graph-Tokenizing Large Language Models with Reconstructive Graph Instruction Tuning


1️⃣ One-sentence summary

This paper proposes a new method, RGLM, that has the large language model attempt to reconstruct the graph information after processing graph data. This aligns complex graph structures with text more effectively and corrects the bias of existing methods, which over-rely on text while underusing the information in the graph itself.

Source: arXiv 2603.01385