Fixed-time synchronization for competitive neural networks with Gaussian-wavelet-type activation functions and discrete delays

Jie Zhou · Haibo Bao

Corresponding author: Haibo Bao, [email protected], School of Mathematics and Statistics, Southwest University, Chongqing 400715, China

Received: 16 February 2020
© Korean Society for Informatics and Computational Applied Mathematics 2020
Abstract
In this article, the fixed-time synchronization of competitive neural networks (CNNs) with Gaussian-wavelet-type activation functions and discrete delays is investigated. Firstly, based on Lyapunov stability theory and inequality techniques, simple synchronization conditions are obtained by designing suitable feedback controllers. Secondly, Gaussian-wavelet-type activation functions are adopted in CNNs for the first time; these functions offer clear advantages in network optimization and storage capacity. Furthermore, an explicit upper bound on the settling time for fixed-time synchronization is obtained, and this bound is independent of the initial values of the system. Finally, the theoretical results are verified to be feasible through two numerical simulations.

Keywords Fixed-time synchronization · Competitive neural networks · Gaussian-wavelet-type activation functions · Discrete delays

Mathematics Subject Classification 92B20
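To make the settling-time claim concrete, here is a minimal sketch, not taken from this paper, of two ingredients that results of this type typically rely on; the specific forms below are illustrative assumptions. A Gaussian-wavelet-type activation function may, for example, take the bounded, smooth form
$$f(s) = s\,e^{-s^{2}/2},$$
and the classical fixed-time stability lemma states that if a Lyapunov function satisfies
$$\dot{V}(t) \le -\alpha V^{p}(t) - \beta V^{q}(t), \qquad \alpha,\beta>0,\ 0<p<1,\ q>1,$$
then $V(t)\equiv 0$ for all $t\ge T_{\max}$ with
$$T_{\max} \le \frac{1}{\alpha(1-p)} + \frac{1}{\beta(q-1)},$$
an upper bound on the settling time that does not depend on the initial value $V(0)$.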
1 Introduction

During the past decades, research on neural networks has reached an unprecedented upsurge because of their extensive application in a wide range of fields, such as pattern recognition, signal processing and associative memory [1–5]. In particular, CNNs have attracted a great deal of scholars' interest [6–8]. It is known that lateral inhibitory neural networks with deterministic Hebbian learning laws can simulate the dynamics of the cognitive map of the cerebral cortex without supervised synaptic modification. Subsequently, Cohen and Grossberg [9] proposed
the CNNs model to simulate cell inhibition in neurobiology, and Meyer-Bäse [10] later put forward CNNs with different time scales. CNNs are unsupervised learning neural networks that can simulate biological neural systems to process information through excitation, coordination, inhibition and competition among neurons. The input and output nodes of the network are fully connected, which gives the model a simple structure, fast operation and a simple learning algorithm. In competitive neural networks, when one neuron is excited, it inhibits other neurons through its branches, which causes competition among neurons. When several neurons are suppressed, the most excited neuron escapes the inhibition of the others and emerges as the winner of the competition. Competitive neural networks have two state models: one state describes the dynamical behavior of the network, whose changes are fast and correspond to the active neural network, and the corresponding memory mode
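As a hedged illustration of the two-state description above, a commonly used form of the competitive neural network model with different time scales (a sketch of the standard Meyer-Bäse-type formulation, which need not coincide exactly with the system analyzed in this paper) couples a fast short-term memory (STM) state with a slow long-term memory (LTM) state:
$$\varepsilon\,\dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j\big(x_j(t)\big) + B_i \sum_{l=1}^{P} m_{il}(t)\,y_l, \qquad \text{(STM)}$$
$$\dot{m}_{il}(t) = -m_{il}(t) + y_l\, f_i\big(x_i(t)\big), \qquad \text{(LTM)}$$
where $x_i$ is the current activity level of neuron $i$, $m_{il}$ is the synaptic efficiency, $y_l$ is the constant external stimulus, $f_j$ is the activation function, $B_i$ is the strength of the external stimulus, and $\varepsilon>0$ is the fast time-scale constant; discrete delays enter through delayed activation terms such as $f_j(x_j(t-\tau))$ added to the STM equation.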