
arXiv submission date: 2026-04-07
📄 Abstract - AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning

Large Language Models (LLMs) increasingly rely on agentic capabilities (iterative retrieval, tool use, and decision-making) to overcome the limits of static, parametric knowledge. Yet existing agentic frameworks treat external information as unstructured text and fail to leverage the topological dependencies inherent in real-world data. To bridge this gap, we introduce Agentic Graph Learning (AGL), a paradigm that reframes graph learning as an interleaved process of topology-aware navigation and LLM-based inference. Specifically, we propose AgentGL, the first reinforcement learning (RL)-driven framework for AGL. AgentGL equips an LLM agent with graph-native tools for multi-scale exploration, regulates tool usage via search-constrained thinking to balance accuracy and efficiency, and employs a graph-conditioned curriculum RL strategy to stabilize long-horizon policy learning without step-wise supervision. Across diverse Text-Attributed Graph (TAG) benchmarks and multiple LLM backbones, AgentGL substantially outperforms strong GraphLLMs and GraphRAG baselines, achieving absolute improvements of up to 17.5% in node classification and 28.4% in link prediction. These results demonstrate that AGL is a promising frontier for enabling LLMs to autonomously navigate and reason over complex relational environments. The code is publicly available at this https URL.
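The core loop the abstract describes (interleaving topology-aware navigation via graph-native tools with LLM-based inference, under a tool-use budget) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the tool names (`read_node`, `neighbors`), the budget mechanism standing in for "search-constrained thinking", and the stub LLM are all assumptions.

```python
from collections import deque

class TextAttributedGraph:
    """Toy Text-Attributed Graph: nodes carry text, edges are undirected."""

    def __init__(self, texts, edges):
        self.texts = texts                    # node id -> text attribute
        self.adj = {n: [] for n in texts}
        for u, v in edges:
            self.adj[u].append(v)
            self.adj[v].append(u)

    # Hypothetical "graph-native tools" the agent may call
    def read_node(self, n):
        return self.texts[n]

    def neighbors(self, n):
        return list(self.adj[n])

def agentic_classify(graph, start, llm_infer, tool_budget=4):
    """Explore the graph with at most `tool_budget` tool calls (a crude
    stand-in for search-constrained thinking), then let the LLM infer a
    label for `start` from the gathered multi-hop context."""
    context, frontier, seen = [], deque([start]), {start}
    calls = 0
    while frontier and calls < tool_budget:
        node = frontier.popleft()
        context.append((node, graph.read_node(node)))  # navigation: read text
        calls += 1
        for nb in graph.neighbors(node):               # navigation: expand hop
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return llm_infer(context)                          # LLM-based inference

# Usage with a keyword stub standing in for a real LLM call
g = TextAttributedGraph(
    texts={0: "GNN survey", 1: "graph transformers", 2: "protein folding"},
    edges=[(0, 1), (1, 2)],
)
stub_llm = lambda ctx: (
    "ML" if any("graph" in t.lower() or "gnn" in t.lower() for _, t in ctx)
    else "Bio"
)
print(agentic_classify(g, start=0, llm_infer=stub_llm))
```

In AgentGL itself, the policy deciding which tool to call next is trained with curriculum RL rather than the fixed breadth-first expansion used here; the sketch only shows how navigation and inference interleave.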

Top-level tags: llm agents graph learning
Detailed tags: graph learning reinforcement learning tool usage node classification link prediction

AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning


1️⃣ One-sentence summary

This paper proposes a new framework, AgentGL, which uses reinforcement learning to guide a large language model to navigate and reason autonomously, like an agent, over graph data with complex relational structure, substantially improving performance on tasks such as node classification and link prediction.

Source: arXiv 2604.05846