Chinese Physics B ›› 2023, Vol. 32 ›› Issue (6): 068704. DOI: 10.1088/1674-1056/acb9f6

A progressive surrogate gradient learning for memristive spiking neural network

Shu Wang(王姝), Tao Chen(陈涛), Yu Gong(龚钰), Fan Sun(孙帆), Si-Yuan Shen(申思远), Shu-Kai Duan(段书凯), and Li-Dan Wang(王丽丹)   

  1. College of Artificial Intelligence, Southwest University, Chongqing 400715, China
  • Received: 2022-11-13  Revised: 2023-01-07  Accepted: 2023-02-08  Online: 2023-05-17  Published: 2023-05-24
  • Contact: Shu-Kai Duan  E-mail: duansk@swu.edu.cn
  • Supported by:
    Project supported by the Natural Science Foundation of Chongqing (Grant No. cstc2021jcyj-msxmX0565), the Fundamental Research Funds for the Central Universities (Grant No. SWU021002), and the Graduate Research Innovation Project of Chongqing (Grant No. CYS22242).

Abstract: In recent years, spiking neural networks (SNNs) have received increasing research attention in the field of artificial intelligence due to their high biological plausibility, low energy consumption, and rich spatio-temporal information. However, the non-differentiable spike activity makes SNNs difficult to train in a supervised manner. Most existing methods focus on introducing an approximate derivative to replace it, but they are often based on static surrogate functions. In this paper, we propose a progressive surrogate gradient learning method for the backpropagation of SNNs, which approximates the step function gradually and reduces information loss. Furthermore, memristor crossbar arrays are used to speed up computation and reduce system energy consumption, owing to their hardware advantages. The proposed algorithm is evaluated on both static and neuromorphic datasets using fully connected and convolutional network architectures, and the experimental results indicate that our approach performs well compared with previous work.

Key words: spiking neural network, surrogate gradient, supervised learning, memristor crossbar array
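
The abstract outlines the core idea: during backpropagation, replace the non-differentiable derivative of the spiking step function with a surrogate, and make that surrogate progressively sharper as training proceeds. The sketch below is illustrative only; the sigmoid surrogate, the sharpness parameter k, and the linear schedule in sharpness() are assumptions for demonstration, not the paper's exact formulation.

```python
import torch

class ProgressiveSpike(torch.autograd.Function):
    """Heaviside spike activation with a surrogate gradient whose
    sharpness k is annealed during training (illustrative sketch)."""

    @staticmethod
    def forward(ctx, v, k):
        ctx.save_for_backward(v)
        ctx.k = k
        # Forward pass uses the true (non-differentiable) step function.
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        k = ctx.k
        # Backward pass uses the derivative of sigmoid(k * v), which
        # approaches the step function's gradient as k grows.
        sig = torch.sigmoid(k * v)
        surrogate = k * sig * (1.0 - sig)
        return grad_out * surrogate, None  # no gradient w.r.t. k

def sharpness(epoch, k0=2.0, growth=0.1):
    # Hypothetical schedule: increasing k each epoch makes the surrogate
    # progressively closer to the step function's true gradient.
    return k0 * (1.0 + growth * epoch)

# Usage inside a neuron model: spike when membrane potential crosses threshold.
# spikes = ProgressiveSpike.apply(v_mem - threshold, sharpness(epoch))
```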

PACS: 87.19.L- (Neuroscience); 87.19.ll (Models of single neurons and networks); 89.20.Ff (Computer science and technology)