arXiv submission date: 2026-03-16
📄 Abstract - Lost in Aggregation: On a Fundamental Expressivity Limit of Message-Passing Graph Neural Networks

We define a generic class of functions that captures most conceivable aggregations for Message-Passing Graph Neural Networks (MP-GNNs), and prove that any MP-GNN model with such aggregations induces only a polynomial number of equivalence classes on all graphs - while the number of non-isomorphic graphs is doubly-exponential (in the number of vertices). Adding a familiar perspective, we observe that merely 2 iterations of Color Refinement (CR) induce at least an exponential number of equivalence classes, making the aforementioned MP-GNNs relatively infinitely weaker. Previous results state that MP-GNNs match full CR; however, they concern a weak, 'non-uniform', notion of distinguishing power, where each graph size may require a different MP-GNN to distinguish graphs up to that size. Our results concern both distinguishing between non-equivariant vertices and distinguishing between non-isomorphic graphs.

Top-level tags: theory, machine learning, model evaluation
Detailed tags: graph neural networks, expressivity, message passing, color refinement, graph isomorphism

Lost in Aggregation: On a Fundamental Expressivity Limit of Message-Passing Graph Neural Networks


1️⃣ One-sentence summary

This paper proves that, regardless of the aggregation function used, Message-Passing Graph Neural Networks (MP-GNNs) face a fundamental upper bound on their ability to distinguish graph structures: their expressive power is far weaker than the classic color refinement algorithm, leaving a vast number of non-isomorphic graphs indistinguishable.
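The Color Refinement (CR) baseline the paper compares against is a simple iterative procedure: each vertex's color is repeatedly replaced by a hash of its own color together with the multiset of its neighbors' colors. A minimal sketch (function and variable names are my own, not from the paper) shows how even two rounds can separate simple non-isomorphic graphs:

```python
from collections import Counter

def color_refinement(adj, rounds):
    """Classic Color Refinement (1-WL): refine vertex colors by combining
    each vertex's color with the sorted multiset of its neighbors' colors.
    adj maps each vertex to a list of its neighbors."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        # Signature = (old color, sorted multiset of neighbor colors)
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Relabel distinct signatures with small integers for the next round
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# Two rounds already separate a triangle from a path on 3 vertices:
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path3 = {0: [1], 1: [0, 2], 2: [1]}
hist_t = Counter(color_refinement(triangle, 2).values())
hist_p = Counter(color_refinement(path3, 2).values())
print(hist_t != hist_p)  # True: the color histograms differ, so CR distinguishes them
```

Comparing the histograms of final colors is the standard way CR certifies non-isomorphism; the paper's point is that MP-GNN aggregations collapse exponentially many of the classes such histograms induce.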

Source: arXiv:2603.14846