Efficient and hardware-friendly methods to implement competitive learning for spiking neural networks




ORIGINAL ARTICLE

Lianhua Qu1 • Zhenyu Zhao1 • Lei Wang1 • Yong Wang1

1 College of Computer Science and Technology, National University of Defense Technology, Deya Street 109, Changsha 410073, China

Received: 8 August 2019 / Accepted: 23 January 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract

A spiking neural network (SNN) trained by spike-timing-dependent plasticity (STDP) is a promising computing paradigm for energy-efficient artificial intelligence systems. During the learning procedure of an SNN trained by STDP, two additional bio-inspired mechanisms, lateral inhibition and homeostasis, are usually implemented to achieve competitive learning. However, previous methods for implementing lateral inhibition and homeostasis were not designed with hardware in mind, resulting in solutions that are inefficient to deploy on neuromorphic hardware. For example, existing lateral inhibition methods require a number of connections proportional to the square of the number of learning neurons, and classical homeostasis methods depend on a finely tuned membrane threshold for which no hardware solution is provided. In this paper, we propose two hardware-friendly and scalable methods to achieve lateral inhibition and homeostasis. Using only one inhibitory neuron per learning layer, the proposed lateral inhibition method reduces the number of inhibitory connections from N² to 2N and lowers hardware overhead by sharing the refractory control circuits. Exploiting the adaptive resistance of the memristor, we propose a novel homeostasis method that adapts the leaky current of spiking neurons. In addition, the learning efficiency of different homeostasis methods is studied for the first time by simulating the cognitive task of digit recognition on the MNIST dataset. Simulation results show that the proposed homeostasis method improves learning efficiency by 30–50% while maintaining state-of-the-art performance.

Keywords Spiking neural network · Homeostasis · Lateral inhibition · Competitive learning · Memristor
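As a rough illustration of the connection-count argument in the abstract, the following sketch (our own, not taken from the paper; the function names and the example layer size N = 400 are hypothetical) compares all-to-all lateral inhibition with the shared-inhibitory-neuron scheme:

```python
# Back-of-the-envelope comparison of two lateral-inhibition topologies.
# Illustrative only: names and the example layer size are assumptions,
# not values reported in the paper.

def all_to_all_inhibitory_connections(n_neurons: int) -> int:
    """Conventional lateral inhibition: every learning neuron inhibits
    every other learning neuron, i.e. N*(N-1) ~ N^2 connections."""
    return n_neurons * (n_neurons - 1)

def shared_inhibitory_connections(n_neurons: int) -> int:
    """Shared-inhibitory-neuron scheme: each learning neuron excites one
    shared inhibitory neuron (N connections), which in turn inhibits every
    learning neuron (another N connections), i.e. 2N connections in total."""
    return 2 * n_neurons

if __name__ == "__main__":
    n = 400  # hypothetical learning-layer size
    print(f"all-to-all inhibition:     {all_to_all_inhibitory_connections(n):,} connections")  # 159,600
    print(f"shared inhibitory neuron:  {shared_inhibitory_connections(n):,} connections")      # 800
```

For any realistic layer size the quadratic term dominates, which is why the 2N wiring scheme scales far better on neuromorphic hardware.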

1 Introduction

Spiking neural networks (SNNs) are the third generation of artificial neural networks (ANNs), which aim to achieve artificial intelligence by mimicking the computation and learning methods of the brain [1, 2]. Compared with the second-generation ANNs, the deep neural networks (DNNs), SNNs are more powerful in terms of energy efficiency and robustness [3–5]. Previous studies have shown that SNNs can be trained by spike-timing-dependent plasticity (STDP) to extract features in an unsupervised manner [6, 7]. Based only on the spike-timing relation between two connected neurons, STDP can be computed in a local, event-driven manner, which is intrinsically energy-efficient when implemented in hardware [8, 9]. SNNs tra