Spiking Neural Networks: Background, Recent Development and the NeuCube Architecture
Clarence Tan1 · Marko Šarlija2 · Nikola Kasabov1
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract

This paper reviews recent developments in the still-off-the-mainstream information and data processing area of spiking neural networks (SNN)—the third generation of artificial neural networks. We provide background information about the functioning of biological neurons and discuss the most important and commonly used mathematical neural models. The most relevant information processing techniques, learning algorithms, and applications of spiking neurons are described and discussed, with a focus on the feasibility and biological plausibility of the methods. Specifically, we describe in detail the functioning and organization of the latest version of a 3D spatio-temporal SNN-based data machine framework called NeuCube, as well as its SNN-related submodules. All described submodules are accompanied by formal algorithmic formulations. The architecture is highly relevant for the analysis and interpretation of various types of spatio-temporal brain data (STBD), such as EEG, NIRS, and fMRI, but we also highlight some recent STBD- and non-STBD-based applications. Finally, we summarise and discuss open research problems that can be addressed in the future. These include, but are not limited to: application in the area of EEG-based BCI through transfer learning, and application in the area of affective computing through an extension of the NeuCube framework that would allow for a biologically plausible SNN-based integration of central and peripheral nervous system measures. A MATLAB implementation of NeuCube's SNN-related module is available for research and teaching purposes.

Keywords Artificial neural networks · Spiking neural networks · Spike encoding · Spike-timing dependent plasticity · Spatio-temporal brain data · NeuCube
Marko Šarlija [email protected]
Clarence Tan [email protected]
Nikola Kasabov [email protected]

1 Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Private Bag 92006, Auckland 1010, New Zealand
2 Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia
1 Introduction

In the parlance of machine learning and artificial intelligence (AI), a neural network can be defined as a network of neurons that are able to perform computations and solve problems. Neural networks as seen today have come a long way since their origins in the late 1950s in the work of Hubel and Wiesel [1], followed by the development of the Neocognitron, a neural network composed of multiple layers, by Fukushima in the early 1980s [2]. Depending on the type of neurons used, artificial neural networks (ANNs), as they are referred to, can be thought of as belonging to three generations. First-generation ANNs are composed of neurons that compute a weighted sum of binary inputs and produce an output of 1 if the sum crosses a pre
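The first-generation neuron described above can be sketched as a simple threshold unit. The following is a minimal illustrative example (the function name and the AND-gate parameters are our own choices, not from the paper), assuming the unit fires when the weighted sum reaches a fixed threshold:

```python
def threshold_neuron(inputs, weights, threshold):
    """First-generation (threshold-gate) neuron: weighted sum of
    binary inputs, binary output (1 if the sum reaches the threshold)."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# With unit weights and threshold 2, the neuron behaves as an AND gate:
print(threshold_neuron([1, 1], [1.0, 1.0], 2.0))  # 1 (both inputs active)
print(threshold_neuron([1, 0], [1.0, 1.0], 2.0))  # 0 (sum below threshold)
```

Second- and third-generation networks replace this binary output with continuous activation functions and, in the case of SNNs, with discrete spikes in time, as discussed later in the paper.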