arXiv submission date: 2026-02-11
📄 Abstract - Exploring the impact of adaptive rewiring in Graph Neural Networks

This paper explores sparsification methods as a form of regularization in Graph Neural Networks (GNNs) to address high memory usage and computational costs in large-scale graph applications. Using techniques from Network Science and Machine Learning, including Erdős-Rényi random sparsification of model parameters, we enhance the efficiency of GNNs for real-world applications. We demonstrate our approach on N-1 contingency assessment in electrical grids, a critical task for ensuring grid reliability. We apply our methods to three datasets of varying sizes, exploring Graph Convolutional Networks (GCN) and Graph Isomorphism Networks (GIN) with different degrees of sparsification and rewiring. Comparison across sparsification levels shows the potential of combining insights from both research fields to improve GNN performance and scalability. Our experiments highlight the importance of tuning sparsity parameters: while sparsity can improve generalization, excessive sparsity may hinder learning of complex patterns. Our adaptive rewiring approach, particularly when combined with early stopping, proves promising by allowing the model to adapt its connectivity structure during training. This research contributes to understanding how sparsity can be effectively leveraged in GNNs for critical applications like power grid reliability analysis.
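The abstract names two mechanisms: Erdős-Rényi sparsification (each parameter kept independently with some probability) and adaptive rewiring (periodically dropping some active connections and regrowing new ones during training). The paper's exact procedure is not given here, so the following NumPy sketch is only an illustration of those two ideas under that assumption; the function names and the drop-and-regrow rule are hypothetical, not the authors' implementation.

```python
import numpy as np

def erdos_renyi_mask(shape, sparsity, rng=None):
    """Erdős-Rényi-style mask: each entry is kept (1.0) independently
    with probability (1 - sparsity), else zeroed."""
    rng = np.random.default_rng() if rng is None else rng
    return (rng.random(shape) > sparsity).astype(np.float32)

def rewire_mask(mask, fraction, rng=None):
    """Adaptive-rewiring sketch: drop `fraction` of the active entries
    and regrow the same number at random inactive positions, so the
    overall sparsity level stays constant while connectivity changes."""
    rng = np.random.default_rng() if rng is None else rng
    flat = mask.ravel()                       # view: edits write through
    active = np.flatnonzero(flat == 1.0)
    inactive = np.flatnonzero(flat == 0.0)
    n = int(fraction * active.size)
    flat[rng.choice(active, n, replace=False)] = 0.0
    flat[rng.choice(inactive, n, replace=False)] = 1.0
    return mask
```

In a training loop, the mask would be multiplied elementwise into a layer's weight matrix, and `rewire_mask` would be called every few epochs so the model can adapt its connectivity structure, as the abstract describes.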

Top-level tags: machine learning, systems, model training
Detailed tags: graph neural networks, sparsification, adaptive rewiring, regularization, power grid analysis

Exploring the impact of adaptive rewiring in Graph Neural Networks


1️⃣ One-sentence summary

This paper studies sparsification methods such as adaptive rewiring to optimize Graph Neural Networks, reducing computational cost while preserving performance, and validates the approach on critical tasks such as power grid reliability analysis.

Source: arXiv:2602.10754