Tunability of auto resonance network
V. M. Aparanji1 · Uday V. Wali2 · R. Aparna3

Received: 19 October 2019 / Accepted: 9 April 2020 / Published online: 18 April 2020
© Springer Nature Switzerland AG 2020
Abstract
This paper proposes a new type of Artificial Neural Network called the Auto-Resonance Network (ARN), derived from synergistic control of biological joints. The network can be tuned to any real-valued input without degradation of the learning rate. The neuronal density of the network is low and grows at a linear or low-order polynomial rate with the number of input classes. The input coverage of each neuron can be tuned dynamically to match the properties of the input data. ARN can be used as part of hierarchical structures to support deep learning applications.

Keywords Artificial neural network · Auto resonance network · Self organizing maps · Adaptive resonance theory
1 Introduction

Classical neural networks suffer from size and temporal superposition effects, generally called the stability-plasticity dilemma in the Artificial Neural Network (ANN) literature [1–3]. Neuroscience studies describe these effects as the binding problem and the superposition catastrophe [4]. Some researchers take this a step further and state that electrical oscillations in biological sensory systems trigger cells in a sequence, effectively serializing certain recognition activity in time and adding a new Degree of Freedom (DoF) to the biological recognition engine [5]. As the number of neural cells in a network does not increase over time, the initial network has to be large enough to accommodate the knowledge likely to be acquired over time. Each learning experience has to bind to a subset of the existing neural infrastructure. However, as the knowledge base grows, newer knowledge has to superimpose on the existing infrastructure, possibly fragmenting the existing subsets. Old knowledge is replaced or distorted by new knowledge, effectively destabilizing the established subsets. This may not always be damaging, but it does distort or refine old subsets.
Kohonen networks, called Self-Organizing Maps (SOM), start with a predefined set of nodes initialized to random weights [6]. When an input is applied, some of the nodes produce a high output, and one among them is chosen as the winner. The key to Kohonen's networks is that the neighbors of the winner node adjust their weights towards those of the winner. Over time, repetition of this process creates a neighborhood of nodes that recognize similar inputs. Each such neighborhood represents one class of input. As a neighborhood is not constrained to be convex, it should be possible to support nonlinear classification. If the number of input classes grows too large in relation to the total number of nodes in the network, the neighborhoods have to split and merge to accommodate new classes. Therefore, SOMs are subject to the superposition catastrophe. Notice that they do not suffer from the binding problem, as data classes are associated with sets of nodes.
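To make the neighborhood weight-adjustment mechanism concrete, the following is a minimal sketch of one SOM training step, assuming a 2-D grid of nodes and a Gaussian neighborhood function; the grid size, learning rate lr, and neighborhood width sigma are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_shape = (10, 10)   # 10x10 map of nodes (illustrative)
dim = 3                 # dimensionality of the input vectors
weights = rng.random((*grid_shape, dim))  # random initial weights

def train_step(x, lr=0.1, sigma=1.5):
    """Move the winner and its neighbors toward the input x."""
    # 1. Winner: the node whose weight vector is closest to the input.
    dists = np.linalg.norm(weights - x, axis=-1)
    winner = np.unravel_index(np.argmin(dists), grid_shape)

    # 2. Neighborhood: influence decays with grid distance from the
    #    winner, so nearby nodes adjust more strongly.
    rows, cols = np.indices(grid_shape)
    grid_dist2 = (rows - winner[0]) ** 2 + (cols - winner[1]) ** 2
    influence = np.exp(-grid_dist2 / (2 * sigma ** 2))

    # 3. Update: each node moves toward x in proportion to its
    #    influence; repeated over many inputs, this carves out
    #    neighborhoods of nodes that respond to similar inputs.
    weights[...] += lr * influence[..., None] * (x - weights)

# Feed the map a stream of random inputs.
for _ in range(100):
    train_step(rng.random(dim))
```

With a fixed number of nodes, each new input class must claim territory on this same grid, which is why the neighborhoods eventually split and merge as described above.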