LLM as Graph Kernel: Rethinking Message Passing on Text-Rich Graphs
1️⃣ One-sentence summary
This paper proposes a new method called RAMP, which treats the large language model (LLM) itself as the core aggregation operator over the graph structure, performing iterative raw-text reasoning and message passing directly on text-rich graphs. This unifies the handling of discriminative and generative tasks and effectively improves performance.
Text-rich graphs, which integrate complex structural dependencies with abundant textual information, are ubiquitous yet remain challenging for existing learning paradigms. Conventional methods and even LLM-hybrids compress rich text into static embeddings or summaries before structural reasoning, creating an information bottleneck and detaching updates from the raw content. We argue that in text-rich graphs, the text is not merely a node attribute but the primary medium through which structural relationships are manifested. We introduce RAMP, a Raw-text Anchored Message Passing approach that moves beyond using LLMs as mere feature extractors and instead recasts the LLM itself as a graph-native aggregation operator. RAMP exploits the text-rich nature of the graph via a novel dual-representation scheme: it anchors inference on each node's raw text during each iteration while propagating dynamically optimized messages from neighbors. It further handles both discriminative and generative tasks under a single unified generative formulation. Extensive experiments show that RAMP effectively bridges the gap between graph propagation and deep text reasoning, achieving competitive performance and offering new insights into the role of LLMs as graph kernels for general-purpose graph learning.
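To make the core idea concrete, below is a minimal, hypothetical sketch of raw-text anchored message passing: each iteration, the LLM aggregates a node's raw text together with the current messages from its neighbors. The `llm` function is a placeholder stand-in (not the authors' implementation), and all names (`ramp_step`, the graph dict layout) are illustrative assumptions.

```python
def llm(prompt: str) -> str:
    # Placeholder: a real implementation would query an actual LLM.
    # Here we just echo a truncated prompt so the sketch runs end to end.
    return prompt[:80]

def ramp_step(graph, messages):
    """One round of message passing: the LLM aggregates each node's raw
    text (the anchor) with the current messages from its neighbors."""
    new_messages = {}
    for node, neighbors in graph["edges"].items():
        raw_text = graph["text"][node]            # inference anchored on raw text
        neighbor_msgs = [messages[n] for n in neighbors]
        prompt = (
            f"Node text: {raw_text}\n"
            f"Neighbor messages: {neighbor_msgs}\n"
            "Summarize this node in the context of its neighbors."
        )
        new_messages[node] = llm(prompt)          # LLM acts as the aggregation operator
    return new_messages

# Toy text-rich graph with two mutually linked nodes
graph = {
    "text": {"a": "Paper on GNNs.", "b": "Paper on LLMs."},
    "edges": {"a": ["b"], "b": ["a"]},
}
messages = dict(graph["text"])                    # initialize messages with raw node text
for _ in range(2):                                # two rounds of propagation
    messages = ramp_step(graph, messages)
```

Note how, unlike embedding-based pipelines, each round re-reads the node's raw text rather than a compressed summary, which is the "anchoring" the abstract describes.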
From arXiv: 2603.14937