
arXiv submission date: 2026-02-25
📄 Abstract - RADAR: Reasoning as Discrimination with Aligned Representations for LLM-based Knowledge Graph Reasoning

Knowledge graph reasoning (KGR) infers missing facts, with recent advances increasingly harnessing the semantic priors and reasoning abilities of Large Language Models (LLMs). However, prevailing generative paradigms are prone to memorizing surface-level co-occurrences rather than learning genuine relational semantics, limiting out-of-distribution generalization. To address this, we propose RADAR, which reformulates KGR from generative pattern matching to discriminative relational reasoning. We recast KGR as discriminative entity selection, where reinforcement learning enforces relative entity separability beyond token-likelihood imitation. Leveraging this separability, inference operates directly in representation space, ensuring consistency with the discriminative optimization and bypassing generation-induced hallucinations. Across four benchmarks, RADAR achieves 5-6% relative gains on link prediction and triple classification over strong LLM baselines, while increasing task-relevant mutual information in intermediate representations by 62.9%, indicating more robust and transferable relational reasoning.

Top-level tags: llm, natural language processing, machine learning
Detailed tags: knowledge graph reasoning, discriminative learning, representation alignment, link prediction, reinforcement learning

RADAR: Reasoning as Discrimination with Aligned Representations for LLM-based Knowledge Graph Reasoning


1️⃣ One-Sentence Summary

This paper proposes RADAR, a method that reformulates knowledge graph reasoning from generative pattern matching to discriminative relational reasoning. Reinforcement learning trains the model to separate candidate entities, so inference can then operate directly in representation space, yielding more accurate and reliable predictions, better out-of-distribution generalization, and fewer hallucinations.
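The core idea of discriminative, representation-space inference can be illustrated with a minimal sketch: instead of generating an answer token by token, score every candidate entity by the similarity of its representation to the query representation and pick the best-separated one. The function name `rank_entities` and the cosine-similarity scoring here are illustrative assumptions, not the paper's actual architecture or training objective.

```python
import numpy as np

def rank_entities(query_repr: np.ndarray, entity_reprs: np.ndarray):
    """Illustrative sketch (not RADAR's actual method): score candidate
    entities by cosine similarity to a query representation and return
    their indices sorted best-first, plus the raw scores."""
    q = query_repr / np.linalg.norm(query_repr)
    E = entity_reprs / np.linalg.norm(entity_reprs, axis=1, keepdims=True)
    scores = E @ q                      # cosine similarity per candidate
    return np.argsort(-scores), scores  # best-first ranking

# Toy example: 3 candidate tail entities in a 4-d representation space.
query = np.array([1.0, 0.0, 0.0, 0.0])
entities = np.array([
    [0.9, 0.1, 0.0, 0.0],   # nearly aligned with the query
    [0.0, 1.0, 0.0, 0.0],   # orthogonal to the query
    [0.5, 0.5, 0.0, 0.0],   # partially aligned
])
order, scores = rank_entities(query, entities)
print(order.tolist())  # → [0, 2, 1]
```

Because the answer is an argmax over a fixed candidate set rather than free-form generation, this style of inference cannot emit an entity that does not exist, which is the sense in which the paper says it bypasses generation-induced hallucinations.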

Source: arXiv 2602.21951