Abstract - The Width Wall: A Strict Expressivity Hierarchy for Hypergraph Neural Networks
Hypergraphs provide a natural framework for modeling higher-order interactions in scientific, social, and biological systems. Hypergraph neural networks (HGNNs) aim to learn from such data, yet it remains unclear which higher-order structures these models can represent. We show that hypergraph expressivity is governed by which small patterns an architecture can detect and count. We formalize this via homomorphism densities, which measure how often a structural motif appears in a hypergraph. Combining classical homomorphism-count completeness with invariant approximation, we show that homomorphism densities generate all continuous hypergraph invariants and organize them into a strict hierarchy indexed by hypertree width. This yields a Width Wall: a fundamental architectural limit beyond which no hidden dimension, training procedure, or fixed-depth HGNN can represent invariants requiring wider patterns. Our framework provides a unified characterization of 15 HGNN architectures, precisely identifies the information lost by clique expansion, and motivates density-aware models that extend expressivity beyond bounded-width message passing. We experimentally validate this finding on a node classification suite of real-world hypergraphs, where the Width Wall predicts when graph-reduction baselines fail and when density features help.
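To make the central notion concrete, here is a minimal brute-force sketch of hypergraph homomorphism counting and the resulting density. The function names (`hom_count`, `hom_density`) and the homomorphism convention used (the image of every pattern hyperedge must be contained in some host hyperedge) are illustrative assumptions, not the paper's definitions; conventions for hypergraph homomorphisms vary in the literature.

```python
from itertools import product

def hom_count(pattern_vertices, pattern_edges, host_vertices, host_edges):
    """Brute-force count of vertex maps f: pattern -> host such that the
    image of each pattern hyperedge lies inside some host hyperedge
    (one common convention; hypothetical helper for illustration)."""
    host_edge_sets = [frozenset(e) for e in host_edges]
    count = 0
    # Enumerate all |V(host)|^|V(pattern)| vertex maps.
    for image in product(host_vertices, repeat=len(pattern_vertices)):
        f = dict(zip(pattern_vertices, image))
        if all(any({f[v] for v in e} <= he for he in host_edge_sets)
               for e in pattern_edges):
            count += 1
    return count

def hom_density(pattern_vertices, pattern_edges, host_vertices, host_edges):
    # Normalize by the total number of vertex maps, giving a value in [0, 1].
    total = len(host_vertices) ** len(pattern_vertices)
    return hom_count(pattern_vertices, pattern_edges,
                     host_vertices, host_edges) / total

# A single-hyperedge pattern mapped into a host with one covering hyperedge:
# every map preserves the edge, so the density is 1.
print(hom_density([0, 1], [[0, 1]], [0, 1, 2], [[0, 1, 2]]))  # → 1.0
```

The exponential enumeration is only viable for tiny patterns; the hierarchy in the abstract is indexed by hypertree width precisely because bounded-width patterns admit far cheaper counting.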
The Width Wall: A Strict Expressivity Hierarchy for Hypergraph Neural Networks
1️⃣ One-Sentence Summary
This paper shows that the expressive power of hypergraph neural networks (HGNNs) is limited by the "width" (i.e., hypertree width) of the local patterns they can detect, and proves the existence of a "Width Wall": no fixed-depth HGNN can cross this barrier to represent complex structures that require wider patterns. The work provides a unified expressivity yardstick for 15 mainstream HGNN architectures and points to incorporating density features as a way to break through this limit.