
arXiv submission date: 2026-04-14
📄 Abstract - KG-Reasoner: A Reinforced Model for End-to-End Multi-Hop Knowledge Graph Reasoning

Large Language Models (LLMs) exhibit strong abilities in natural language understanding and generation, yet they struggle with knowledge-intensive reasoning. Structured Knowledge Graphs (KGs) provide an effective form of external knowledge representation and have been widely used to enhance performance in classical Knowledge Base Question Answering (KBQA) tasks. However, performing precise multi-hop reasoning over KGs for complex queries remains highly challenging. Most existing approaches decompose the reasoning process into a sequence of isolated steps executed through a fixed pipeline. While effective to some extent, such designs constrain reasoning flexibility and fragment the overall decision process, often leading to incoherence and the loss of critical intermediate information from earlier steps. In this paper, we introduce KG-Reasoner, an end-to-end framework that integrates multi-step reasoning into a unified "thinking" phase of a Reasoning LLM. Through Reinforcement Learning (RL), the LLM is trained to internalize the KG traversal process, enabling it to dynamically explore reasoning paths and perform backtracking when necessary. Experiments on eight multi-hop and knowledge-intensive reasoning benchmarks demonstrate that KG-Reasoner achieves competitive or superior performance compared with state-of-the-art methods. Code is available at the repository: this https URL.
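To make the KG-traversal idea from the abstract concrete, the following is a minimal, hypothetical sketch of multi-hop reasoning over a knowledge graph with backtracking, implemented as a plain depth-first search over a toy graph. This is only an illustration of the traversal primitive the paper says the LLM internalizes; the graph, the entity/relation names, and the `find_paths` helper are all invented here and are not from the paper, which trains the behavior via RL rather than running an explicit search.

```python
# Toy illustration of multi-hop KG traversal with backtracking.
# NOT the paper's method: the graph, names, and helper are hypothetical.

from typing import Dict, List, Tuple

# A tiny KG: head entity -> list of (relation, tail entity) edges.
KG = Dict[str, List[Tuple[str, str]]]


def find_paths(kg: KG, start: str, target: str,
               max_hops: int) -> List[List[Tuple[str, str, str]]]:
    """Depth-first search: explore each outgoing edge in turn,
    recurse, and backtrack when a branch dead-ends or the hop
    budget is exhausted. Returns all (head, relation, tail) paths."""
    paths: List[List[Tuple[str, str, str]]] = []

    def dfs(node: str, path: List[Tuple[str, str, str]],
            visited: set) -> None:
        if node == target and path:
            paths.append(list(path))  # record a complete reasoning path
            return
        if len(path) >= max_hops:
            return  # hop budget spent: backtrack
        for rel, nxt in kg.get(node, []):
            if nxt in visited:
                continue  # avoid cycles
            path.append((node, rel, nxt))
            dfs(nxt, path, visited | {nxt})
            path.pop()  # backtrack: undo the last hop

    dfs(start, [], {start})
    return paths


# Hypothetical example graph and 2-to-3-hop query.
kg: KG = {
    "Paris":  [("capital_of", "France")],
    "France": [("member_of", "EU"), ("borders", "Spain")],
    "Spain":  [("member_of", "EU")],
}

paths = find_paths(kg, "Paris", "EU", max_hops=3)
# Finds both Paris -> France -> EU and Paris -> France -> Spain -> EU.
```

The point of the sketch is the `path.pop()` step: an explicit undo of the last hop, which is the backtracking behavior the abstract says the RL-trained model learns to perform inside its "thinking" phase instead of via a fixed pipeline.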

Top-level tags: llm agents, natural language processing
Detailed tags: knowledge graph reasoning, reinforcement learning, multi-hop reasoning, knowledge base question answering, end-to-end training

KG-Reasoner: A Reinforced Model for End-to-End Multi-Hop Knowledge Graph Reasoning


1️⃣ One-Sentence Summary

This paper proposes KG-Reasoner, an end-to-end framework that uses reinforcement learning to train a large language model to perform dynamic, coherent multi-step reasoning over a knowledge graph, much as a human would, effectively addressing the difficulty of answering complex queries.

Source: arXiv:2604.12487