arXiv submission date: 2026-02-05
📄 Abstract - Breaking Symmetry Bottlenecks in GNN Readouts

Graph neural networks (GNNs) are widely used for learning on structured data, yet their ability to distinguish non-isomorphic graphs is fundamentally limited. These limitations are usually attributed to message passing; in this work we show that an independent bottleneck arises at the readout stage. Using finite-dimensional representation theory, we prove that all linear permutation-invariant readouts, including sum and mean pooling, factor through the Reynolds (group-averaging) operator and therefore project node embeddings onto the fixed subspace of the permutation action, erasing all non-trivial symmetry-aware components regardless of encoder expressivity. This yields both a new expressivity barrier and an interpretable characterization of what global pooling preserves or destroys. To overcome this collapse, we introduce projector-based invariant readouts that decompose node representations into symmetry-aware channels and summarize them with nonlinear invariant statistics, preserving permutation invariance while retaining information provably invisible to averaging. Empirically, swapping only the readout enables fixed encoders to separate WL-hard graph pairs and improves performance across multiple benchmarks, demonstrating that readout design is a decisive and under-appreciated factor in GNN expressivity.
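To make the collapse concrete, here is a minimal sketch (hypothetical code, not from the paper) of the Reynolds (group-averaging) operator acting on a node-embedding matrix. It shows that averaging over all node permutations keeps only the per-channel mean, so mean pooling cannot separate two graphs whose node embeddings happen to share that mean.

```python
import math
from itertools import permutations

import numpy as np

def reynolds(X: np.ndarray) -> np.ndarray:
    """Average the node-feature matrix X (n x d) over all row permutations."""
    n = X.shape[0]
    acc = np.zeros_like(X, dtype=float)
    for perm in permutations(range(n)):
        acc += X[list(perm), :]
    return acc / math.factorial(n)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))                   # embeddings of 4 nodes, 3 channels
RX = reynolds(X)

# Every row of R(X) equals the column mean: the fixed subspace of the permutation action.
assert np.allclose(RX, np.tile(X.mean(axis=0), (4, 1)))

# Hence mean pooling cannot separate X from any Y with the same column mean,
# even though their node-embedding multisets differ.
Z = rng.normal(size=(4, 3))
Y = X + (Z - Z.mean(axis=0))                  # same mean pooling as X, different nodes
assert np.allclose(X.mean(axis=0), Y.mean(axis=0))
```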

Top-level tags: machine learning theory, model training
Detailed tags: graph neural networks, symmetry, expressivity, pooling, invariant readouts

Breaking Symmetry Bottlenecks in GNN Readouts


1️⃣ One-Sentence Summary

This paper identifies and addresses an overlooked bottleneck in graph neural networks: conventional linear readouts (such as sum or mean pooling) erase all symmetry-aware information in the node features, and the authors introduce a nonlinear readout that preserves permutation invariance while markedly improving the model's ability to distinguish different graph structures.
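As a rough illustration of the projector-based idea, the sketch below (assumed, not the authors' implementation; the function name and the choice of power-sum statistics are my own) splits node embeddings into the mean channel retained by averaging and its residual, then summarizes the residual with nonlinear permutation-invariant statistics that mean pooling would discard.

```python
import numpy as np

def projector_readout(X: np.ndarray, max_power: int = 3) -> np.ndarray:
    """Map (n, d) node embeddings to a permutation-invariant graph vector."""
    mean = X.mean(axis=0)        # image of the averaging (Reynolds) projector
    residual = X - mean          # symmetry-aware component that averaging erases
    # Per-channel power sums of the residual are invariant to node reordering,
    # yet they change whenever the multiset of residuals changes.
    stats = [np.sum(residual ** p, axis=0) for p in range(2, max_power + 1)]
    return np.concatenate([mean] + stats)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 4))
perm = rng.permutation(5)

# Permutation invariance: reordering the nodes leaves the readout unchanged.
assert np.allclose(projector_readout(X), projector_readout(X[perm]))
```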

Source: arXiv:2602.05950