Chin. Phys. B ›› 2025, Vol. 34 ›› Issue (4): 40204-040204. doi: 10.1088/1674-1056/adbeda
Feng Lin(林峰)1 and Jia-Lin He(何嘉林)1,2,3,†
Abstract: Graph neural networks (GNNs) have demonstrated excellent performance in graph representation learning. However, as the volume of graph data grows, issues of cost and efficiency become increasingly prominent. Graph distillation methods address this challenge by extracting a smaller, reduced graph, ensuring that GNNs trained on the original and reduced graphs show similar performance. Existing methods, however, primarily optimize the feature matrix of the reduced graph and rely on correlation information from GNNs, while neglecting the original graph's structure and redundant nodes. This often results in a loss of critical information within the reduced graph. To overcome this limitation, we propose a graph distillation method guided by network symmetry. Specifically, we identify symmetric nodes with equivalent neighborhood structures and merge them into "super nodes", thereby simplifying the network structure, reducing redundant parameter optimization, and enhancing training efficiency. At the same time, instead of relying on the original node features, we employ gradient descent to learn optimal features that align with the original features, thus improving downstream task performance. Theoretically, our method guarantees that the reduced graph retains the key information present in the original graph. Extensive experiments demonstrate that our approach achieves significant improvements in graph distillation, exhibiting strong generalization capability and outperforming existing graph reduction methods.
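To make the super-node idea concrete, the following is a minimal Python sketch (not the authors' code) that merges nodes with identical open neighborhoods, one simple notion of symmetric, structurally equivalent nodes; the paper's exact symmetry criterion and the feature-matching step may differ. The function name merge_symmetric_nodes and the adjacency representation are hypothetical.

    # A minimal sketch of super-node merging: nodes whose neighbor sets
    # are identical are treated as symmetric and collapsed together.
    from collections import defaultdict

    def merge_symmetric_nodes(adjacency):
        """Group nodes with equivalent neighborhoods into super nodes.

        adjacency: dict mapping each node to the set of its neighbors.
        Returns a dict mapping a representative node to the list of
        symmetric nodes it absorbs.
        """
        groups = defaultdict(list)
        for node, neighbors in adjacency.items():
            # Nodes sharing the same neighbor set (excluding themselves)
            # have equivalent neighborhood structure.
            key = frozenset(neighbors - {node})
            groups[key].append(node)
        # Each group with more than one member becomes a super node.
        return {members[0]: members
                for members in groups.values() if len(members) > 1}

    # Example: nodes 1 and 2 both connect only to node 0, so they are
    # symmetric and can be merged into one super node.
    adjacency = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
    print(merge_symmetric_nodes(adjacency))  # {1: [1, 2]}

Since merged nodes contribute the same structural information, collapsing them shrinks the number of parameters the subsequent feature optimization must fit, which is the efficiency gain the abstract describes.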
CLC number: (Combinatorics; graph theory)