Chinese Physics B ›› 2025, Vol. 34 ›› Issue (4): 040204. DOI: 10.1088/1674-1056/adbeda


Graph distillation with network symmetry

Feng Lin (林峰)1 and Jia-Lin He (何嘉林)1,2,3,†

  1 China West Normal University, Nanchong 637000, China;
  2 The Internet of Things Perception and Big Data Analysis Key Laboratory of Nanchong City, Nanchong 637000, China;
  3 Institute of Artificial Intelligence, China West Normal University, Nanchong 637000, China
  • Received: 2024-11-13; Revised: 2025-02-25; Accepted: 2025-03-11; Online: 2025-04-15; Published: 2025-04-15
  • Corresponding author: Jia-Lin He, E-mail: hejialin32@126.com
  • Supported by:
    Project supported by the National Natural Science Foundation of China (Grant No. 62176217), the Sichuan Provincial Science and Technology Program, China (Grant No. 2018RZ0081), and the Fundamental Research Funds of China West Normal University (Grant No. 17E063).


Abstract: Graph neural networks (GNNs) have demonstrated excellent performance in graph representation learning. However, as the volume of graph data grows, issues related to cost and efficiency become increasingly prominent. Graph distillation methods address this challenge by extracting a smaller, reduced graph, ensuring that GNNs trained on both the original and reduced graphs show similar performance. Existing methods, however, primarily optimize the feature matrix of the reduced graph and rely on correlation information from GNNs, while neglecting the original graph's structure and redundant nodes. This often results in a loss of critical information within the reduced graph. To overcome this limitation, we propose a graph distillation method guided by network symmetry. Specifically, we identify symmetric nodes with equivalent neighborhood structures and merge them into "super nodes", thereby simplifying the network structure, reducing redundant parameter optimization, and enhancing training efficiency. At the same time, instead of relying on the original node features, we employ gradient descent to match optimal features that align with the original features, thus improving downstream task performance. Theoretically, our method guarantees that the reduced graph retains the key information present in the original graph. Extensive experiments demonstrate that our approach achieves significant improvements in graph distillation, exhibiting strong generalization capability and outperforming existing graph reduction methods.

Key words: graph neural networks, graph distillation, network symmetry, super nodes, feature optimization

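As a minimal sketch (not the authors' implementation), the "super node" idea described in the abstract can be illustrated by grouping nodes whose neighborhood sets coincide — one common notion of structural equivalence, under which swapping the nodes leaves the graph unchanged — and contracting each group into a single representative node. The function name and the dict-of-sets graph representation are illustrative assumptions:

```python
from collections import defaultdict

def merge_symmetric_nodes(adj):
    """Group nodes with identical neighbor sets (excluding themselves)
    and contract each group into one 'super node'.

    adj: dict mapping node -> set of neighbors (undirected graph).
    Returns (reduced_adj, groups), where groups maps each
    representative node to the list of original nodes it absorbs.
    """
    # Signature of a node: its neighborhood minus itself. Nodes
    # sharing a signature are structurally equivalent (symmetric).
    signature = defaultdict(list)
    for v, nbrs in adj.items():
        signature[frozenset(nbrs - {v})].append(v)

    # Pick the first node of each equivalence class as representative.
    groups = {members[0]: members for members in signature.values()}
    rep = {v: r for r, members in groups.items() for v in members}

    # Rebuild the adjacency over representatives only.
    reduced = defaultdict(set)
    for v, nbrs in adj.items():
        for u in nbrs:
            if rep[v] != rep[u]:
                reduced[rep[v]].add(rep[u])
                reduced[rep[u]].add(rep[v])
    return dict(reduced), groups

# Toy star graph: leaves 1, 2, 3 all attach only to hub 0, so they
# are symmetric and collapse into a single super node.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
reduced, groups = merge_symmetric_nodes(star)
```

On the star graph above, the three leaves share the signature {0} and merge into one super node, leaving a two-node reduced graph. The paper additionally optimizes the reduced graph's feature matrix by gradient descent, which this structural sketch does not attempt to reproduce.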

PACS:
  • 02.10.Ox (Combinatorics; graph theory)
  • 02.40.Pc (General topology)
  • 07.05.Mh (Neural networks, fuzzy logic, artificial intelligence)
  • 11.30.-j (Symmetry and conservation laws)