Lattice map spiking neural networks (LM-SNNs) for clustering and classifying image data
Hananel Hazan1,2 · Daniel J. Saunders2 · Darpan T. Sanghavi2 · Hava Siegelmann2 · Robert Kozma2,3

1 College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA, USA
2 Biologically-Inspired Neural & Dynamical Systems Laboratory (BINDS), University of Massachusetts Amherst, Amherst, MA 01003, USA
3 Center for Large-Scale Intelligent Optimization & Networks (CLION), University of Memphis, Memphis, TN 38152, USA
© Springer Nature Switzerland AG 2019
Abstract
Spiking neural networks (SNNs) with a lattice architecture are introduced in this work, combining several desirable properties of SNNs and self-organized maps (SOMs). Networks are trained with biologically motivated, unsupervised learning rules to obtain a self-organized grid of filters via cooperative and competitive excitatory-inhibitory interactions. Several inhibition strategies are developed and tested, such as (i) incrementally increasing the inhibition level over the course of network training, and (ii) switching the inhibition level from low to high (two-level) after an initial training segment. During the labeling phase, the spiking activity generated by data with known labels is used to assign neurons to categories of data, which are then used to evaluate the network’s classification ability on a held-out set of test data. Several biologically plausible evaluation rules are proposed and compared, including a population-level confidence rating and an n-gram-inspired method. The effectiveness of the proposed self-organized learning mechanism is tested on the MNIST benchmark dataset, as well as on images produced by playing the Atari Breakout game.

Keywords Spiking neural networks (SNN) · Self-organized maps (SOMs) · Self-clustering · Online learning · Robustness · Unsupervised learning · Winner-take-all classification

Mathematics Subject Classification (2010) 68T05 · 68T10 · 68T04 · 68T99
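To make the labeling and readout step concrete, the following is a minimal NumPy sketch of one plausible population-vote readout in the spirit described above; the function names, array shapes, and averaging scheme are illustrative assumptions, not the authors' implementation.

import numpy as np

def assign_labels(spikes, labels, n_classes):
    # spikes: (n_samples, n_neurons) spike counts recorded on labeled data
    # labels: (n_samples,) integer class labels
    # Returns (n_neurons,) per-neuron class assignments.
    rates = np.zeros((n_classes, spikes.shape[1]))
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            rates[c] = spikes[mask].mean(axis=0)  # mean response to class c
    return rates.argmax(axis=0)  # each neuron gets the class it responds to most

def classify(spikes, assignments, n_classes):
    # Predict by averaging the spikes of neurons assigned to each class
    # (a simple population vote; confidence-weighted and n-gram readouts
    # refine this basic idea).
    votes = np.zeros((spikes.shape[0], n_classes))
    for c in range(n_classes):
        members = assignments == c
        if members.any():
            votes[:, c] = spikes[:, members].sum(axis=1) / members.sum()
    return votes.argmax(axis=1)

A test sample is thus assigned to the class whose labeled neurons respond most strongly on average, evaluated entirely from recorded spiking activity without gradient-based training of a readout layer.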
1 Introduction

Today’s dominant artificial intelligence (AI) approach uses deep neural networks (DNNs), which are based on global gradient-descent learning algorithms [1, 2], wherein a loss
function is defined and all DNN parameters are updated using approximate derivatives to minimize it. The success of this approach rests on training DNNs with massive amounts of data, which requires significant computational resources [3, 4] supplied by increasingly powerful supercomputing facilities worldwide. For certain practical problems, however, one may not have a dataset large enough to adequately cover the problem space, or one may need to make decisions quickly without waiting for an expensive training process. Several approaches have been proposed to overcome the computational constraints of deep learning (DL), one of which is based on local learning rules invoking neuro-biologically motivated spike-timing-dependent plasticity (STDP).
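As a concrete illustration of such a local rule, below is a minimal sketch of pair-based STDP with exponentially decaying spike traces; the time constants, learning rates, and function signature are illustrative assumptions rather than parameters taken from this paper.

import numpy as np

def stdp_step(w, pre_spikes, post_spikes, x_pre, x_post,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0, w_max=1.0):
    # w           : (n_pre, n_post) synaptic weights
    # pre_spikes  : (n_pre,) boolean, presynaptic neurons that fired this step
    # post_spikes : (n_post,) boolean, postsynaptic neurons that fired
    # x_pre/x_post: exponentially decaying spike traces
    # Decay the traces, then bump them for neurons that just spiked.
    x_pre = x_pre * np.exp(-dt / tau) + pre_spikes
    x_post = x_post * np.exp(-dt / tau) + post_spikes
    # Potentiate when a post spike follows recent pre activity;
    # depress when a pre spike follows recent post activity.
    w += a_plus * np.outer(x_pre, post_spikes)
    w -= a_minus * np.outer(pre_spikes, x_post)
    np.clip(w, 0.0, w_max, out=w)  # keep weights bounded
    return w, x_pre, x_post

Because the update for each synapse depends only on the activity of its own pre- and postsynaptic neurons, no global loss or backpropagated gradient is required, which is precisely what makes such rules attractive as a local alternative to DL training.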