arXiv submission date: 2026-02-24
📄 Abstract - Probing Graph Neural Network Activation Patterns Through Graph Topology

Curvature notions on graphs provide a theoretical description of graph topology, highlighting bottlenecks and densely connected regions. Artifacts of the message-passing paradigm in Graph Neural Networks (GNNs), such as oversmoothing and oversquashing, have been attributed to these regions. However, it remains unclear how the topology of a graph interacts with the learned preferences of GNNs. We probe this correspondence through Massive Activations (MAs), which correspond to extreme edge activation values in Graph Transformers. Our findings on synthetic graphs and molecular benchmarks reveal that MAs do not preferentially concentrate on curvature extremes, despite their theoretical link to information flow. On the Long Range Graph Benchmark, we identify a systemic "curvature shift": global attention mechanisms exacerbate topological bottlenecks, drastically increasing the prevalence of negative curvature. Our work reframes curvature as a diagnostic probe for understanding when and why graph learning fails.
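The abstract's claim that curvature flags bottleneck edges can be illustrated with the simplest such notion: the combinatorial Forman curvature of an unweighted edge, F(u, v) = 4 - deg(u) - deg(v). This is a minimal sketch under that assumption; the paper itself may use richer notions (e.g. Ollivier-Ricci), and the `forman_curvature` helper and the barbell graph below are illustrative, not taken from the paper.

```python
from itertools import combinations

def forman_curvature(edges):
    """Return {edge: curvature} using the simplest Forman notion
    for unweighted graphs: F(u, v) = 4 - deg(u) - deg(v)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

# Barbell graph: two 4-cliques (dense regions) joined by one bridge
# edge (3, 4) -- the kind of topological bottleneck the abstract describes.
clique_a = list(combinations(range(4), 2))
clique_b = list(combinations(range(4, 8), 2))
edges = clique_a + clique_b + [(3, 4)]

curv = forman_curvature(edges)
bridge = min(curv, key=curv.get)
print(bridge, curv[bridge])  # the bridge edge is the curvature minimum: (3, 4) -4
```

The bridge attains the most negative curvature (-4, versus -2 for interior clique edges), which is exactly the "bottleneck as curvature extreme" picture the paper tests MAs against.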

Top-level tags: machine learning theory, model evaluation
Detailed tags: graph neural networks, graph topology, curvature, attention mechanisms, activation patterns

Probing Graph Neural Network Activation Patterns Through Graph Topology


1️⃣ One-sentence summary

This study finds that, contrary to theoretical expectations, graph neural networks do not pay particular attention during training to the densely or sparsely connected key regions of a graph. Instead, their global attention mechanisms exacerbate information bottlenecks in the graph, degrading learning. The authors therefore propose graph curvature as a diagnostic tool for predicting and understanding why graph learning fails.

Source: arXiv:2602.21092