Grounding vs. Compositionality: On the Non-Complementarity of Reasoning in Neuro-Symbolic Systems
1️⃣ One-Sentence Summary
By introducing the differentiable Iterative Logic Tensor Network architecture, this paper presents the first systematic experimental evidence that, in neuro-symbolic systems, symbol grounding (associating symbols with perceptual data) does not naturally give rise to compositional reasoning: reasoning must be trained through an explicit learning objective rather than emerging automatically.
Compositional generalization remains a foundational weakness of modern neural networks, limiting their robustness and applicability in domains requiring out-of-distribution reasoning. A central, yet unverified, assumption in neuro-symbolic AI is that compositional reasoning will emerge as a byproduct of successful symbol grounding. This work presents the first systematic empirical analysis to challenge this assumption by disentangling the contributions of grounding and reasoning. To operationalize this investigation, we introduce the Iterative Logic Tensor Network ($i$LTN), a fully differentiable architecture designed for multi-step deduction. Using a formal taxonomy of generalization -- probing for novel entities, unseen relations, and complex rule compositions -- we demonstrate that a model trained solely on a grounding objective fails to generalize. In contrast, our full $i$LTN, trained jointly on perceptual grounding and multi-step reasoning, achieves high zero-shot accuracy across all tasks. Our findings provide conclusive evidence that symbol grounding, while necessary, is insufficient for generalization, establishing that reasoning is not an emergent property but a distinct capability that requires an explicit learning objective.
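The abstract does not specify the internals of the $i$LTN, but the multi-step deduction it describes builds on the standard Logic Tensor Network idea of differentiable fuzzy logic. As a hedged illustration only (not the paper's actual architecture), the sketch below shows how a transitivity rule can be applied iteratively over soft truth values using a product t-norm for conjunction and a probabilistic-sum aggregation for the existential quantifier; the function names and the toy relation are hypothetical.

```python
import numpy as np

def t_norm(a, b):
    # Product t-norm: a differentiable fuzzy AND over truth values in [0, 1].
    return a * b

def forward_chain(R, steps=3):
    """Iteratively apply the transitivity rule R(x,z) <- R(x,y) AND R(y,z)
    to a soft adjacency matrix R of fuzzy truth values in [0, 1].
    This mimics multi-step deduction as repeated differentiable rule firing.
    """
    for _ in range(steps):
        # conj[x, y, z] = R[x, y] * R[y, z]  (fuzzy conjunction of the premises)
        conj = t_norm(R[:, :, None], R[None, :, :])
        # Soft existential over the middle variable y: probabilistic-sum (fuzzy OR).
        derived = 1.0 - np.prod(1.0 - conj, axis=1)
        # Keep previously grounded facts; only strengthen truth values.
        R = np.maximum(R, derived)
    return R

# Toy example: a two-hop chain of near-certain facts, x0 -> x1 -> x2.
R = np.zeros((3, 3))
R[0, 1] = 0.9
R[1, 2] = 0.8
out = forward_chain(R)
print(round(float(out[0, 2]), 2))  # transitive inference: 0.9 * 0.8 -> 0.72
```

Because every operation is a smooth function of the truth values, a reasoning loss on the derived facts (e.g., `out[0, 2]`) can backpropagate into the grounding network that produced `R`, which is the kind of joint grounding-plus-reasoning objective the abstract argues is necessary.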
Source: arXiv: 2604.26521